Robustness is a crucial factor for the successful deployment of robots in unstructured environments, particularly in the domain of Simultaneous Localization and Mapping (SLAM). Simulation-based benchmarks have emerged as a highly scalable approach to robustness evaluation compared to real-world data collection. However, crafting challenging and controllable noisy worlds with diverse perturbations remains relatively under-explored. To this end, we propose a novel, customizable pipeline for noisy data synthesis, aimed at assessing the resilience of multi-modal SLAM models against various perturbations. This pipeline incorporates customizable hardware setups, software components, and perturbed environments. In particular, we introduce a comprehensive perturbation taxonomy along with a perturbation composition toolbox, allowing the transformation of clean simulations into challenging noisy environments. Utilizing this pipeline, we instantiate the Robust-SLAM benchmark, which includes diverse perturbation types, to evaluate the risk tolerance of existing advanced multi-modal SLAM models. Our extensive analysis uncovers the susceptibility of existing SLAM models to real-world disturbances, despite their demonstrated accuracy on standard benchmarks. Our perturbation synthesis toolbox, SLAM robustness evaluation pipeline, and Robust-SLAM benchmark will be made publicly available at https://github.com/Xiaohao-Xu/SLAM-under-Perturbation/.