Visual localization on standard-definition (SD) maps has emerged as a promising low-cost and scalable solution for autonomous driving. However, existing regression-based approaches often overlook inherent geometric priors, resulting in suboptimal training efficiency and limited localization accuracy. In this paper, we propose a novel homography-guided pose estimator network for fine-grained visual localization between multi-view images and SD maps. We construct input pairs that satisfy a homography constraint by projecting ground-view features into the bird's-eye-view (BEV) domain and enforcing semantic alignment with map features. We then leverage homography relationships to guide feature fusion and restrict the pose outputs to a feasible region, which significantly improves training efficiency and localization accuracy over prior methods that rely on attention-based fusion and direct 3-DoF pose regression. To the best of our knowledge, this is the first work to unify BEV semantic reasoning with homography learning for image-to-map localization. Furthermore, by explicitly modeling homography transformations, the proposed framework naturally supports cross-resolution inputs, enhancing its flexibility. Extensive experiments on the nuScenes dataset demonstrate that our approach significantly outperforms existing state-of-the-art visual localization methods. Code and pretrained models will be publicly released to foster future research.
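To make the homography constraint concrete, the sketch below illustrates how a planar 3-DoF pose (x, y, yaw) induces a homography between a BEV feature grid and an SD-map grid; because both lie on the ground plane, the induced projective map is a rigid transform with a trivial last row. This is a minimal illustration, not the paper's implementation: the helper `pose_to_homography` and the resolutions `bev_res` and `map_res` (metres per pixel) are assumed names and values, chosen only to show how cross-resolution inputs can be absorbed into the transform.

```python
import numpy as np

def pose_to_homography(x: float, y: float, yaw: float,
                       bev_res: float, map_res: float) -> np.ndarray:
    """Build the 3x3 matrix mapping BEV pixel coords to SD-map pixel coords."""
    c, s = np.cos(yaw), np.sin(yaw)
    # Pixels -> metres in the BEV frame (assumed isotropic resolution).
    scale_bev = np.diag([bev_res, bev_res, 1.0])
    # Rigid 3-DoF transform (rotation + translation) in the ground plane.
    rigid = np.array([[c, -s, x],
                      [s,  c, y],
                      [0.0, 0.0, 1.0]])
    # Metres -> pixels in the map frame (possibly a different resolution).
    scale_map = np.diag([1.0 / map_res, 1.0 / map_res, 1.0])
    return scale_map @ rigid @ scale_bev

# Example: warp a BEV pixel (u, v) into SD-map pixel coordinates.
H = pose_to_homography(x=12.0, y=-3.5, yaw=np.deg2rad(30.0),
                       bev_res=0.5, map_res=0.25)
u_map, v_map, w = H @ np.array([64.0, 32.0, 1.0])  # homogeneous BEV pixel
print(u_map / w, v_map / w)                         # corresponding map pixel
```

Restricting the estimator's output to poses that yield such a valid ground-plane homography is what keeps the predictions inside the feasible region described above.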