We are interested in long-term deployments of autonomous robots to aid astronauts with maintenance and monitoring operations in settings such as the International Space Station. Unfortunately, such environments tend to be highly dynamic and unstructured, and their frequent reconfiguration poses a challenge for robust long-term localization of robots. Many state-of-the-art visual feature-based localization algorithms are not robust to spatial scene changes, and SLAM algorithms, while promising, cannot run within the low compute budget available to space robots. To address this gap, we present a computationally efficient semantic masking approach for visual feature matching that improves the accuracy and robustness of visual localization systems during long-term deployment in changing environments. Our method introduces a lightweight check that requires matches to lie on long-term static objects and to share consistent semantic classes. We evaluate this approach on both map-based relocalization and relative pose estimation and show that it improves Absolute Trajectory Error (ATE) and correct match ratios on the publicly available Astrobee dataset. While this approach was originally developed for microgravity robotic free-flyers, it can be applied to any visual feature matching pipeline to improve robustness.
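The semantic check described above can be sketched as a simple post-filter on candidate feature matches. This is a minimal illustration, not the authors' implementation: the set of long-term-static classes, the per-keypoint label arrays, and the match representation are all hypothetical placeholders for whatever the segmentation model and matcher actually produce.

```python
# Hypothetical set of classes assumed to stay fixed over long deployments.
STATIC_CLASSES = {"handrail", "vent", "panel"}

def filter_matches(matches, labels_query, labels_map):
    """Keep only matches whose endpoints lie on long-term static objects
    and carry the same semantic class in both images.

    matches      -- list of (query_keypoint_idx, map_keypoint_idx) pairs
    labels_query -- semantic class label per query-image keypoint
    labels_map   -- semantic class label per map-image keypoint
    """
    kept = []
    for qi, mi in matches:
        label_q, label_m = labels_query[qi], labels_map[mi]
        # Reject matches on dynamic objects or with inconsistent classes.
        if label_q in STATIC_CLASSES and label_q == label_m:
            kept.append((qi, mi))
    return kept
```

Because the filter is a constant-time lookup per match, it adds negligible overhead to an existing matching pipeline, consistent with the low-compute constraint the abstract emphasizes.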