Table of Contents
- 1. Background
- 2. Data Loading and Preprocessing
- 3. Model Architecture
- 4. Loss Computation
- 5. Summary and Discussion
1. Background
This post walks through the open-source code released alongside the PersFormer 3D Lane paper.
2. Data Loading and Preprocessing
For how the data is organized, refer to the OpenLane dataset introduction in this series: 自动驾驶三维车道线检测系列—OpenLane数据集介绍.
Coordinate systems: the Waymo (OpenLane) camera coordinate system is defined with X pointing forward, Y pointing left and Z pointing up.
With that convention, the extrinsic stored in the dataset looks like this:
"extrinsic": [
[
0.9999790376291948,
0.0045643569339703614,
-0.004592488211072773,
1.5393679168564702
],
[
-0.004561049760530192,
0.9999893316554864,
0.0007303425877435168,
-0.023691101834458727
],
[
0.004595772761080904,
-0.0007093807107760264,
0.9999891877674156,
2.116014704116658
],
[
0.0,
0.0,
0.0,
1.0
]
]
This is the extrinsic between the camera and the vehicle (ego) coordinate system. It shows that the camera frame and the vehicle frame are essentially aligned in orientation (the rotation part is close to the identity); only the origins of the two frames differ.
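A quick numeric check, using nothing but the matrix shown above, confirms this reading: the rotation block is within a fraction of a degree of the identity, and the translation puts the camera roughly 1.54 m forward of and 2.12 m above the vehicle origin.

```python
import numpy as np

# Extrinsic (camera -> vehicle) copied from the ground-truth file above.
T_cam2veh = np.array([
    [ 0.9999790376291948,   0.0045643569339703614, -0.004592488211072773,   1.5393679168564702],
    [-0.004561049760530192,  0.9999893316554864,     0.0007303425877435168, -0.023691101834458727],
    [ 0.004595772761080904, -0.0007093807107760264,  0.9999891877674156,     2.116014704116658],
    [ 0.0,                   0.0,                     0.0,                    1.0],
])

R = T_cam2veh[:3, :3]
t = T_cam2veh[:3, 3]

# Rotation angle away from the identity: theta = arccos((trace(R) - 1) / 2).
theta = np.degrees(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))
print(f"rotation angle vs. identity: {theta:.3f} deg")   # ~0.37 deg
print(f"camera origin in vehicle frame: {t}")            # ~[1.54, -0.02, 2.12] m
```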
The ground-truth 3D lane coordinates look like this:
"xyz": [
[
17.382800978819017,
17.50040078037587,
...
20.274071148194103,
20.38195255123586,
20.48338473286711,
...
21.931553979558124,
22.035070660270055,
22.225836627658275,
...
26.61997710857102,
26.79194824101369,
26.900410829029237,
27.120094997627227,
27.373557832034134,
27.544735503944892,
27.76517221715751,
...
34.04364108074635,
34.34183952795403,
34.44990326517201,
34.64902798842567,
34.80330435386684,
34.97738513466869,
35.10772807701519,
35.21127993338897,
...
43.26603227123105,
43.394400283321744,
43.54402499844422
],
[
2.106162267265404,
2.117356716888622,
2.1267196912979487,
2.134409778081357,
...
2.0836507579014345,
2.083507611187606,
2.0835073709153624,
2.0840386385354006,
2.084208834336426,
2.0839706956475896,
...
2.041385347080901,
2.0409709291451565,
2.040427413456658,
2.039615358123066,
...
2.0134828598759036,
2.0115237025717696,
2.008972154232112
],
[
-2.3179790375385325,
-2.3071957428128735,
...
-2.6387563474505975,
-2.6083380476462663,
-2.615133499516616,
-2.6078418903876996,
...
-2.8227286818859705,
-2.7188114534133323,
-2.7218183933456475,
-2.7215278064851747,
...
-2.8628242116626312,
-2.8972871689102906,
-2.9289600110699885
]
],
As can be seen, the ground-truth 3D lane coordinates are expressed in the camera coordinate system, i.e. the Z values sit roughly at ground level in the camera frame (around minus the camera mounting height).
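As a sanity check on this interpretation, the sketch below transforms the first and last lane point from the "xyz" arrays above into the vehicle frame, assuming the extrinsic maps camera coordinates into vehicle coordinates (consistent with the ~2.1 m Z translation being the camera mounting height). The resulting Z values come out close to the road surface; the residual mostly reflects road slope.

```python
import numpy as np

# Extrinsic (camera -> vehicle), as shown earlier.
T_cam2veh = np.array([
    [ 0.9999790376291948,   0.0045643569339703614, -0.004592488211072773,   1.5393679168564702],
    [-0.004561049760530192,  0.9999893316554864,     0.0007303425877435168, -0.023691101834458727],
    [ 0.004595772761080904, -0.0007093807107760264,  0.9999891877674156,     2.116014704116658],
    [ 0.0,                   0.0,                     0.0,                    1.0],
])

# First and last ground-truth lane points (x, y, z) from the "xyz" arrays above,
# expressed in the Waymo camera frame (X forward, Y left, Z up).
pts_cam = np.array([
    [17.382800978819017, 2.106162267265404, -2.3179790375385325],
    [43.54402499844422,  2.008972154232112, -2.9289600110699885],
])

pts_h = np.hstack([pts_cam, np.ones((len(pts_cam), 1))])   # homogeneous coordinates
pts_veh = (T_cam2veh @ pts_h.T).T[:, :3]
print(pts_veh[:, 2])   # z in the vehicle frame: near 0 (road surface), road slope aside
```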
- Initialization
  - Hyper-parameter setup
  - H_crop computation
  - H_ipm2g computation (see the homography sketch after this list)
  - H_g2ipm computation
  - Anchor grid computation
  - Anchor dim computation
- OpenLane dataset initialization
  - Input: the dataset folder
  - Output: the X, Y and Z offsets as well as the image anchors
  - Algorithm:
    - Collect all .json files, i.e. the ground-truth files
    - Adjust the extrinsics, transforming from the Waymo vehicle coordinate system to the ground coordinate system. One point worth stressing: the Waymo camera coordinate system is defined differently from the usual camera convention. In Waymo, X points forward, Y points left and Z points up, and the vehicle coordinate system is defined the same way, which is why the default extrinsic rotation in the dataset is approximately the identity and only the origins differ. The ground coordinate system, by contrast, has Y pointing forward, X pointing right and Z pointing up. Note also that the 3D lane annotations are given in the Waymo camera coordinate system.
    - Transform the 3D lane coordinates from the camera coordinate system to the ground coordinate system
    - Transform the 3D lane coordinates from the ground coordinate system to the flat-ground coordinate system
    - Compute the offsets that serve as the regression targets
- Model construction
- Model training
  - Data loading
    - The training data loader repeats many of the same steps as the dataset initialization above
    - Load the image and resize it according to the resize parameters
    - Normalize the image
    - Data augmentation: rotate the image and rotate the anchors accordingly
    - Build the segmentation label by projecting the 3D lane ground truth back onto the image
    - Draw the 3D lane ground truth on the IPM to obtain the BEV segmentation map used later in training (see the sketch after the output list below)
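For the H_ipm2g / H_g2ipm items above, the mapping between IPM pixel coordinates and ground coordinates is just a linear correspondence between the IPM image rectangle and the chosen top-view region. The sketch below is a minimal illustration of that idea; the helper name ipm_homographies, the corner ordering and the example top_view_region / ipm_h / ipm_w values are assumptions for this post, and the repository may construct these matrices differently.

```python
import numpy as np

def ipm_homographies(top_view_region, ipm_h, ipm_w):
    """Return (H_ipm2g, H_g2ipm) as 3x3 homogeneous matrices.

    top_view_region: 4x2 array of ground-plane corners (x, y), ordered as
                     top-left, top-right, bottom-left, bottom-right,
                     matching IPM pixel corners (0,0), (w-1,0), (0,h-1), (w-1,h-1).
    """
    src = np.array([[0, 0], [ipm_w - 1, 0], [0, ipm_h - 1], [ipm_w - 1, ipm_h - 1]], dtype=np.float64)
    dst = np.asarray(top_view_region, dtype=np.float64)

    # Fit an affine map [x_g, y_g] = A @ [u, v, 1] (least squares over the 4 corners).
    ones = np.ones((4, 1))
    A, _, _, _ = np.linalg.lstsq(np.hstack([src, ones]), dst, rcond=None)   # 3x2
    H_ipm2g = np.vstack([A.T, [0, 0, 1]])                                   # 3x3
    H_g2ipm = np.linalg.inv(H_ipm2g)
    return H_ipm2g, H_g2ipm

# Example: a 20 m wide, 100 m long region in front of the ego vehicle
# (x right, y forward in the ground frame), rendered on a 208x128 IPM.
top_view_region = np.array([[-10, 103], [10, 103], [-10, 3], [10, 3]])
H_ipm2g, H_g2ipm = ipm_homographies(top_view_region, ipm_h=208, ipm_w=128)
print(H_ipm2g @ np.array([0, 0, 1]))   # -> [-10, 103, 1]: top-left IPM pixel maps to the far-left ground corner
```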
In the end, the data-processing stage provides the following to the downstream modules:
- Normalized image data
- Segmentation label
- Ground-truth lane (anchor)
- Ground-truth lane 2D
- Camera height
- Camera pitch
- Intrinsics
- Extrinsics (ground to camera)
- Data augmentation matrix
- Segmentation BEV map
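Regarding the last item (and the IPM-drawing step in the data-loading list), the BEV segmentation map can be built by mapping each ground-frame lane polyline into IPM pixels with H_g2ipm and rasterizing it. The sketch below only illustrates the idea; draw_bev_seg_map and its parameters are hypothetical names, and only the use of H_g2ipm reflects the steps described above.

```python
import cv2
import numpy as np

def draw_bev_seg_map(lanes_ground, H_g2ipm, ipm_h, ipm_w, thickness=2):
    """Rasterize ground-frame lane polylines into a binary BEV segmentation map.

    lanes_ground: list of (N_i, 2) arrays of (x, y) lane points in the ground frame.
    H_g2ipm:      3x3 homography mapping ground (x, y, 1) to IPM pixel (u, v, 1).
    """
    seg = np.zeros((ipm_h, ipm_w), dtype=np.uint8)
    for lane in lanes_ground:
        pts_h = np.hstack([lane, np.ones((len(lane), 1))])   # homogeneous ground coords
        uv = (H_g2ipm @ pts_h.T).T
        uv = uv[:, :2] / uv[:, 2:3]                          # back to pixel coordinates
        uv = np.round(uv).astype(np.int32)
        cv2.polylines(seg, [uv.reshape(-1, 1, 2)], isClosed=False, color=1, thickness=thickness)
    return seg
```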
3. Model Architecture
The overall structure of the model is as follows:
- Encoder
  - Backbone: ResNet, outputs a feature list, out_featList
  - Neck: a stack of 2D convolutions, BN and ReLU, outputs a feature list, neck_out
  - ShareEncoder, FrontViewPathway, outputs frontview_features
  - Take the last level of frontview_features, frontview_final_feat, and run the 2D lane attention on it, producing laneatt_proposals_list
  - PerspectiveTransformer, whose inputs are the original image, frontview_features and the transformation matrix M
    - Build the 2D grid in BEV
    - Project the BEV grid onto the IPM using the matrix M
    - Transformer encoder: self-attn -> norm -> cross-attn -> norm -> ffn -> norm
      - Query: query_embed, in the BEV view
      - Value: frontview_features
      - BEV 2D grid
      - IPM 2D grid
      - First, self-attention within the BEV queries
      - Then cross-attention between the BEV queries and the IPM grid / frontview_features
      - Finally norm and the remaining operations produce the output feature
    - Output features: projs
- BEV Head
  - Input: features projs
  - Output: features bev_feature
  - Algorithm:
    - SingleTopViewPathway
    - SingleTopViewPathway
    - A series of convolutions
- LanePredictionHead
  - Input: features bev_feature
  - Output: processed features bev_feature
  - Algorithm:
    - A series of convolutions
    - Apply a sigmoid to a specific slice of the output tensor x, mapping those values from arbitrary reals into (0, 1). This is done for the visibility and confidence scores in the model output so that they can be interpreted as probabilities (see the sketch after this list).
- Final model outputs: laneatt_proposals_list, out, cam_height, cam_pitch, pred_seg_bev_map, uncertainty_loss
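To make the sigmoid step concrete: each anchor column predicts, per lane type, a vector of length 3K+1 laid out as [K X-offsets, K Z values, K visibilities, 1 lane probability] (this layout can be read off the loss code in the next section), and only the visibility/probability part needs to be squashed into (0, 1). The sketch below shows that slicing; the helper name squash_vis_and_prob is made up here, and the exact slice the repository applies the sigmoid to may differ slightly.

```python
import torch

def squash_vis_and_prob(x, num_types, num_y_steps):
    """Apply a sigmoid to the visibility and lane-probability slice of each anchor vector.

    x: tensor of shape (N, W, num_types * anchor_dim) with anchor_dim = 3*K + 1 and
       per-type layout [K x-offsets, K z values, K visibilities, 1 probability].
    """
    K = num_y_steps
    anchor_dim = 3 * K + 1
    x = x.clone()
    for t in range(num_types):
        start = t * anchor_dim + 2 * K   # first visibility entry of lane type t
        end = (t + 1) * anchor_dim       # ...through the probability entry
        x[:, :, start:end] = torch.sigmoid(x[:, :, start:end])
    return x

# Example: batch of 2, 16 anchor columns, 3 lane types, K = 10 y-steps.
out = squash_vis_and_prob(torch.randn(2, 16, 3 * (3 * 10 + 1)), num_types=3, num_y_steps=10)
```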
4. Loss Computation
Function calls:
# 3D loss
loss_3d, loss_3d_dict = criterion(output_net, gt, pred_hcam, gt_hcam, pred_pitch, gt_pitch)
# Add lane attention loss
loss_att, loss_att_dict = model.module.laneatt_head.loss(laneatt_proposals_list, gt_laneline_img,
                                                         cls_loss_weight=args.cls_loss_weight,
                                                         reg_vis_loss_weight=args.reg_vis_loss_weight)
loss_seg = bceloss(pred_seg_bev_map, seg_bev_map)
# overall loss
loss = self.compute_loss(args, epoch, loss_3d, loss_att, loss_seg, uncertainty_loss, loss_3d_dict, loss_att_dict)
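The final compute_loss call combines the 3D anchor loss, the 2D lane attention loss and the BEV segmentation loss, and the uncertainty_loss argument suggests a learned multi-task weighting. The repository's exact formula is not reproduced here; the sketch below shows the common learned-uncertainty scheme (Kendall-style log-variance weighting) that such a combination typically follows, purely as an assumption. The name combine_losses is hypothetical.

```python
import torch

def combine_losses(loss_3d, loss_att, loss_seg, log_vars):
    """Uncertainty-weighted multi-task loss: sum_i exp(-s_i) * L_i + s_i.

    log_vars: learnable tensor of 3 log-variances (the per-task 'uncertainty' parameters).
    """
    losses = torch.stack([loss_3d, loss_att, loss_seg])
    return torch.sum(torch.exp(-log_vars) * losses + log_vars)

# Example with scalar dummy losses and learnable log-variances.
log_vars = torch.zeros(3, requires_grad=True)
total = combine_losses(torch.tensor(1.2), torch.tensor(0.7), torch.tensor(0.3), log_vars)
```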
- 3D loss
import numpy as np
import torch
import torch.nn as nn

class Laneline_loss_gflat_3D(nn.Module):
    """
    Compute the loss between predicted lanelines and ground-truth lanelines in anchor representation.
    The anchor representation is in flat ground space X', Y' and real 3D Z. Visibility estimation is also included.
    The X', Y' and Z estimates are transformed to real X, Y to compare with the ground truth. An additional loss in
    X, Y space is expected to guide the learning of features that satisfy the geometric constraints between the two spaces.

    loss = loss0 + loss1 + loss2 + loss3 (+ loss4 when pred_cam is True)
    loss0: cross-entropy loss for lane point visibility
    loss1: cross-entropy loss for lane type classification
    loss2: sum of geometric distance between 3D lane anchor points in X and Z offsets
    loss3: geometric distance in the real X, Y space after back-transforming the flat-ground offsets
    loss4: error in estimating pitch and camera height (only when pred_cam is True)
    """
    def __init__(self, batch_size, num_types, anchor_x_steps, anchor_y_steps, x_off_std, y_off_std, z_std, pred_cam=False, no_cuda=False):
        super(Laneline_loss_gflat_3D, self).__init__()
        self.batch_size = batch_size
        self.num_types = num_types
        self.num_x_steps = anchor_x_steps.shape[0]
        self.num_y_steps = anchor_y_steps.shape[0]
        self.anchor_dim = 3*self.num_y_steps + 1
        self.pred_cam = pred_cam

        # prepare broadcast anchor_x_tensor, anchor_y_tensor, std_X, std_Y, std_Z
        tmp_zeros = torch.zeros(self.batch_size, self.num_x_steps, self.num_types, self.num_y_steps)
        self.x_off_std = torch.tensor(x_off_std.astype(np.float32)).reshape(1, 1, 1, self.num_y_steps) + tmp_zeros
        self.y_off_std = torch.tensor(y_off_std.astype(np.float32)).reshape(1, 1, 1, self.num_y_steps) + tmp_zeros
        self.z_std = torch.tensor(z_std.astype(np.float32)).reshape(1, 1, 1, self.num_y_steps) + tmp_zeros
        self.anchor_x_tensor = torch.tensor(anchor_x_steps.astype(np.float32)).reshape(1, self.num_x_steps, 1, 1) + tmp_zeros
        self.anchor_y_tensor = torch.tensor(anchor_y_steps.astype(np.float32)).reshape(1, 1, 1, self.num_y_steps) + tmp_zeros
        self.anchor_x_tensor = self.anchor_x_tensor/self.x_off_std
        self.anchor_y_tensor = self.anchor_y_tensor/self.y_off_std

        if not no_cuda:
            self.z_std = self.z_std.cuda()
            self.anchor_x_tensor = self.anchor_x_tensor.cuda()
            self.anchor_y_tensor = self.anchor_y_tensor.cuda()

    def forward(self, pred_3D_lanes, gt_3D_lanes, pred_hcam, gt_hcam, pred_pitch, gt_pitch):
        """
        :param pred_3D_lanes: predicted tensor with size N x (ipm_w/8) x 3*(3*K+1)
        :param gt_3D_lanes: ground-truth tensor with size N x (ipm_w/8) x 3*(3*K+1)
        :param pred_pitch: predicted pitch with size N
        :param gt_pitch: ground-truth pitch with size N
        :param pred_hcam: predicted camera height with size N
        :param gt_hcam: ground-truth camera height with size N
        :return:
        """
        sizes = pred_3D_lanes.shape
        # reshape to N x ipm_w/8 x 3 x (3K+1)
        pred_3D_lanes = pred_3D_lanes.reshape(sizes[0], sizes[1], self.num_types, self.anchor_dim)
        gt_3D_lanes = gt_3D_lanes.reshape(sizes[0], sizes[1], self.num_types, self.anchor_dim)
        # class prob N x ipm_w/8 x 3 x 1, anchor values N x ipm_w/8 x 3 x 2K, visibility N x ipm_w/8 x 3 x K
        pred_class = pred_3D_lanes[:, :, :, -1].unsqueeze(-1)
        pred_anchors = pred_3D_lanes[:, :, :, :2*self.num_y_steps]
        pred_visibility = pred_3D_lanes[:, :, :, 2*self.num_y_steps:3*self.num_y_steps]
        gt_class = gt_3D_lanes[:, :, :, -1].unsqueeze(-1)
        gt_anchors = gt_3D_lanes[:, :, :, :2*self.num_y_steps]
        gt_visibility = gt_3D_lanes[:, :, :, 2*self.num_y_steps:3*self.num_y_steps]

        # cross-entropy loss for visibility
        loss0 = -torch.sum(
            gt_visibility*torch.log(pred_visibility + torch.tensor(1e-9)) +
            (torch.ones_like(gt_visibility) - gt_visibility + torch.tensor(1e-9)) *
            torch.log(torch.ones_like(pred_visibility) - pred_visibility + torch.tensor(1e-9)))/self.num_y_steps
        # cross-entropy loss for lane probability
        loss1 = -torch.sum(
            gt_class*torch.log(pred_class + torch.tensor(1e-9)) +
            (torch.ones_like(gt_class) - gt_class) *
            torch.log(torch.ones_like(pred_class) - pred_class + torch.tensor(1e-9)))
        # applying L1 norm does not need to separate X and Z
        loss2 = torch.sum(
            torch.norm(gt_class*torch.cat((gt_visibility, gt_visibility), 3)*(pred_anchors-gt_anchors), p=1, dim=3))
        # compute loss in real 3D X, Y space, the transformation considers offset to anchor and normalization by std
        pred_Xoff_g = pred_anchors[:, :, :, :self.num_y_steps]
        pred_Z = pred_anchors[:, :, :, self.num_y_steps:2*self.num_y_steps]
        gt_Xoff_g = gt_anchors[:, :, :, :self.num_y_steps]
        gt_Z = gt_anchors[:, :, :, self.num_y_steps:2*self.num_y_steps]
        pred_hcam = pred_hcam.reshape(self.batch_size, 1, 1, 1)
        gt_hcam = gt_hcam.reshape(self.batch_size, 1, 1, 1)
        pred_Xoff = (1 - pred_Z * self.z_std / pred_hcam) * pred_Xoff_g - pred_Z * self.z_std / pred_hcam * self.anchor_x_tensor
        pred_Yoff = -pred_Z * self.z_std / pred_hcam * self.anchor_y_tensor
        gt_Xoff = (1 - gt_Z * self.z_std / gt_hcam) * gt_Xoff_g - gt_Z * self.z_std / gt_hcam * self.anchor_x_tensor
        gt_Yoff = -gt_Z * self.z_std / gt_hcam * self.anchor_y_tensor
        loss3 = torch.sum(
            torch.norm(
                gt_class * torch.cat((gt_visibility, gt_visibility), 3) *
                (torch.cat((pred_Xoff, pred_Yoff), 3) - torch.cat((gt_Xoff, gt_Yoff), 3)), p=1, dim=3))

        if not self.pred_cam:
            return loss0+loss1+loss2+loss3
        loss4 = torch.sum(torch.abs(gt_pitch-pred_pitch)) + torch.sum(torch.abs(gt_hcam-pred_hcam))
        return loss0+loss1+loss2+loss3+loss4
- Function description
  - Input: the predicted 3D lanes pred_3D_lanes, the ground truth gt_3D_lanes, the predicted camera height pred_hcam, the ground-truth camera height gt_hcam, the predicted pitch pred_pitch and the ground-truth pitch gt_pitch
  - Output: the 3D loss
  - Algorithm:
    - Compute the visibility loss
    - Compute the lane probability loss
    - Compute the 3D offset loss (in both the flat-ground and the real X, Y space)
    - Compute the pitch and camera-height loss (only when pred_cam is enabled)
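As a quick shape check, the loss can be exercised with random tensors, assuming the Laneline_loss_gflat_3D class above is in scope. The anchor configuration below (16 x-anchor columns, K = 10 y-steps, 3 lane types, unit std) is invented for the demo; only the tensor layout N x num_x_steps x num_types*(3K+1) matters.

```python
import numpy as np
import torch

# Hypothetical anchor configuration: 16 x-anchors, K = 10 y-steps, 3 lane types.
anchor_x_steps = np.linspace(-10, 10, 16)
anchor_y_steps = np.linspace(3, 103, 10)
std = np.ones(10)

criterion = Laneline_loss_gflat_3D(batch_size=2, num_types=3,
                                   anchor_x_steps=anchor_x_steps, anchor_y_steps=anchor_y_steps,
                                   x_off_std=std, y_off_std=std, z_std=std,
                                   pred_cam=False, no_cuda=True)

N, W, anchor_dim = 2, 16, 3 * 10 + 1
pred = torch.rand(N, W, 3 * anchor_dim)   # rand keeps visibility/probability entries inside (0, 1)
gt = torch.rand(N, W, 3 * anchor_dim)
pred_h, gt_h = torch.full((N,), 1.5), torch.full((N,), 1.5)
pred_pitch, gt_pitch = torch.zeros(N), torch.zeros(N)

loss = criterion(pred, gt, pred_h, gt_h, pred_pitch, gt_pitch)
print(loss)   # a scalar tensor
```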
5. Summary and Discussion
This post walked through the open-source code of the PersFormer 3D Lane paper, covering data loading and preprocessing, the model architecture, and the loss computation.