Map-free relocalization is crucial for applications such as autonomous navigation and augmented reality, where relying on pre-built maps is often impractical. The task faces significant challenges from the limitations of feature-matching methods and the inherent lack of metric scale in monocular images; these issues lead to substantial rotational and metric errors, and even localization failures, in real-world scenarios. Large matching errors degrade the entire relocalization pipeline, affecting both rotational and translational accuracy, and because a single camera cannot observe scale directly, recovering metric scale from a single image is essential for reducing translation error. To address these challenges, we propose a map-free relocalization method enhanced by instance knowledge and depth knowledge. By leveraging instance-level matching information to refine global matching results, our method significantly reduces the likelihood of mismatches across different objects; the robustness of instance knowledge across the scene also helps the feature-matching model focus on relevant regions, improving matching accuracy. In addition, we use metric depth estimated from a single image to reduce metric errors and improve the accuracy of scale recovery. By integrating these components, which respectively target large translational and rotational errors, our approach achieves superior performance among map-free relocalization techniques.
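The instance-consistency idea above can be illustrated with a minimal sketch. This is an assumption about how such a filter could be implemented, not the paper's actual pipeline: given keypoint matches between two images and an instance-segmentation label map for each image, matches whose endpoints fall on differently labeled instances are discarded before pose estimation. The function name and array layouts are hypothetical.

```python
import numpy as np

def filter_matches_by_instance(kpts0, kpts1, matches, inst0, inst1):
    """Keep only matches whose endpoints share the same instance label.

    kpts0, kpts1 : (N, 2) / (M, 2) arrays of (x, y) keypoint coordinates
    matches      : (K, 2) array of index pairs into kpts0 / kpts1
    inst0, inst1 : (H, W) integer instance-label maps (0 = background)
    """
    kept = []
    for i, j in matches:
        x0, y0 = kpts0[i].astype(int)
        x1, y1 = kpts1[j].astype(int)
        # Look up the instance label under each matched keypoint.
        l0, l1 = inst0[y0, x0], inst1[y1, x1]
        # Discard matches that cross different objects; matches where both
        # endpoints lie on background (label 0) pass through unchanged.
        if l0 == l1:
            kept.append((i, j))
    return np.array(kept)
```

Restricting correspondences to the same instance in this way is one simple realization of "reducing mismatches across different objects"; the filtered matches would then feed a standard relative-pose solver, with the estimated monocular metric depth used afterward to fix the scale of the recovered translation.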