Self-localization on a 3D map using an inexpensive monocular camera is required for autonomous driving. Camera-based self-localization often relies on a convolutional neural network (CNN), which extracts local features computed from nearby pixels. However, CNNs perform poorly when dynamic obstacles, such as pedestrians, are present. This study proposes a new method that combines a CNN with a Vision Transformer, which excels at extracting global features that capture the relationships among patches across the whole image. Experimental results show that, compared with the state-of-the-art (SOTA) method, the accuracy improvement rate on a CG dataset with dynamic obstacles is 1.5 times higher than that on the same dataset without dynamic obstacles. Moreover, the self-localization error of our method is 20.1% smaller than that of the SOTA method on public datasets. Additionally, a robot running our method localizes itself with an average error of 7.51 cm, which is more accurate than the SOTA method.
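To make the idea of combining local CNN features with global, patch-level Transformer features concrete, the following is a minimal PyTorch sketch. It is not the paper's architecture; the class name `HybridFeatureExtractor`, the layer sizes, and the simple concatenation-based fusion are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class HybridFeatureExtractor(nn.Module):
    """Illustrative CNN + Transformer hybrid (not the authors' model):
    the CNN branch captures local features from nearby pixels, while the
    Transformer branch attends over image patches to capture global
    relationships across the whole image."""

    def __init__(self, img_size=224, patch_size=16, embed_dim=128, depth=2, heads=4):
        super().__init__()
        # CNN branch: local features from small receptive fields
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Transformer branch: relations among non-overlapping patches
        num_patches = (img_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)

    def forward(self, x):
        local_feat = self.cnn(x).flatten(1)                        # (B, 64)
        patches = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, D)
        global_feat = self.encoder(patches + self.pos_embed).mean(dim=1)  # (B, D)
        # Fuse local and global descriptors for downstream localization
        return torch.cat([local_feat, global_feat], dim=1)


# Example: extract a fused descriptor from a single RGB frame
feats = HybridFeatureExtractor()(torch.randn(1, 3, 224, 224))
print(feats.shape)  # torch.Size([1, 192])
```

In practice, the fused descriptor would feed a matching or pose-regression stage against the 3D map; the intent of the sketch is only to show how local and global features can coexist in one extractor.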