A growing number of studies fuse LiDAR and camera information to improve 3D object detection for autonomous driving systems. Recently, a simple yet effective fusion framework that combines LiDAR and camera features in a unified bird's-eye-view (BEV) space has achieved excellent detection performance. In this paper, we propose a LiDAR-camera fusion framework, named SimpleBEV, for accurate 3D object detection; it follows the BEV-based fusion paradigm and improves both the camera and LiDAR encoders. Specifically, we perform camera-based depth estimation using a cascade network and rectify the predicted depth with depth information derived from the LiDAR points. Meanwhile, an auxiliary branch that performs 3D object detection using only the camera-BEV features is introduced to better exploit the camera information during training. In addition, we improve the LiDAR feature extractor by fusing multi-scale sparse convolutional features. Experimental results demonstrate the effectiveness of the proposed method: it achieves 77.6\% NDS on the nuScenes dataset, showing superior performance on the 3D object detection track.
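The depth-rectification step mentioned above can be illustrated with a minimal sketch: project LiDAR points into the image with the camera intrinsics and overwrite the camera-predicted depth at the pixels where a LiDAR measurement exists. This is an illustrative assumption of how such rectification is commonly done, not the paper's exact implementation; the function name and arguments are hypothetical.

```python
import numpy as np

def rectify_depth(pred_depth, lidar_points, intrinsics):
    """Replace predicted depth with sparse LiDAR depth where available.

    pred_depth:   (H, W) depth map predicted by the camera branch.
    lidar_points: (N, 3) points already transformed into the camera frame.
    intrinsics:   (3, 3) camera intrinsic matrix.

    Hypothetical sketch of LiDAR-based depth rectification; the real
    method may blend or supervise the depth rather than overwrite it.
    """
    H, W = pred_depth.shape
    rectified = pred_depth.copy()

    # Keep only points in front of the camera (positive depth).
    pts = lidar_points[lidar_points[:, 2] > 0]

    # Pinhole projection: [u, v, 1]^T ~ K @ [x, y, z]^T.
    uvz = (intrinsics @ pts.T).T
    uv = uvz[:, :2] / uvz[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    # Discard projections that fall outside the image.
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # Overwrite camera-estimated depth with the LiDAR-measured depth.
    rectified[v[valid], u[valid]] = pts[valid, 2]
    return rectified
```

In practice each LiDAR sweep covers only a small fraction of the image, so most pixels keep the camera-estimated depth while the sparse LiDAR hits act as accurate anchors.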