Online high-definition (HD) map construction is an essential part of a safe and robust end-to-end autonomous driving (AD) pipeline. Onboard camera-based approaches suffer from limited depth perception and degraded accuracy under occlusion. In this work, we propose SatMap, an online vectorized HD map estimation method that integrates satellite maps with multi-view camera observations and directly predicts a vectorized HD map for downstream prediction and planning modules. Our method leverages lane-level semantics and texture from satellite imagery captured from a Bird's Eye View (BEV) perspective as a global prior, effectively mitigating depth ambiguity and occlusion. In our experiments on the nuScenes dataset, SatMap achieves a 34.8% mAP improvement over the camera-only baseline and an 8.5% mAP improvement over the camera-LiDAR fusion baseline. Moreover, we evaluate our model under long-range and adverse-weather conditions to demonstrate the advantages of using a satellite prior map. Source code will be available at https://iv.ee.hm.edu/satmap/.