Loop closure is crucial for maintaining the accuracy and consistency of visual SLAM. We propose a method to improve loop closure performance in DPV-SLAM. Our approach integrates AnyLoc, a learning-based visual place recognition technique, as a replacement for the classical Bag of Visual Words (BoVW) loop detection method. In contrast to BoVW, which relies on handcrafted features, AnyLoc utilizes deep feature representations, enabling more robust image retrieval across diverse viewpoints and lighting conditions. Furthermore, we propose an adaptive mechanism that dynamically adjusts the similarity threshold based on environmental conditions, removing the need for manual tuning. Experiments on both indoor and outdoor datasets demonstrate that our method significantly outperforms the original DPV-SLAM in terms of loop closure accuracy and robustness. The proposed method offers a practical and scalable solution for enhancing loop closure performance in modern SLAM systems.
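The abstract gives no implementation details, but the retrieval-with-adaptive-threshold idea it describes can be illustrated with a minimal sketch. The sketch below assumes that global descriptors for each keyframe have already been extracted with AnyLoc; the function names (detect_loop, adaptive_threshold), the mean-plus-k-standard-deviations thresholding rule, and the parameters k, floor, and min_frame_gap are illustrative assumptions, not the method as implemented in the paper.

    import numpy as np

    def cosine_similarities(query_desc, keyframe_descs):
        # Cosine similarity between one query descriptor and a bank of keyframe descriptors.
        q = query_desc / (np.linalg.norm(query_desc) + 1e-12)
        K = keyframe_descs / (np.linalg.norm(keyframe_descs, axis=1, keepdims=True) + 1e-12)
        return K @ q

    def adaptive_threshold(recent_best_sims, k=2.0, floor=0.60):
        # Hypothetical adaptive rule: mean + k*std of recent best-match similarities,
        # clamped from below so the detector never becomes overly permissive.
        if len(recent_best_sims) < 10:
            return floor
        mu, sigma = float(np.mean(recent_best_sims)), float(np.std(recent_best_sims))
        return max(floor, mu + k * sigma)

    def detect_loop(query_desc, keyframe_descs, recent_best_sims, min_frame_gap=30):
        # Return the index of a loop-closure candidate keyframe, or None if no match
        # passes the adaptive similarity threshold.
        if len(keyframe_descs) <= min_frame_gap:
            return None
        sims = cosine_similarities(query_desc, np.stack(keyframe_descs))
        sims[-min_frame_gap:] = -1.0              # mask temporally adjacent keyframes
        best = int(np.argmax(sims))
        tau = adaptive_threshold(recent_best_sims)
        recent_best_sims.append(float(sims[best]))  # update running statistics
        return best if sims[best] >= tau else None

In use, the caller would maintain the descriptor bank and the running list of best-match similarities across keyframes, so the threshold adapts to the similarity distribution of the current environment rather than relying on a hand-tuned constant.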