Mutual localization is the foundation for collaborative perception and task assignment in multi-robot systems, so effectively exploiting limited onboard sensors for mutual localization between marker-less robots is a worthwhile goal. However, because previous work inadequately handles the large scale variations of the observed robot and omits localization refinement, its accuracy is limited when robots are equipped only with RGB cameras. To improve localization precision, this paper proposes RHAML, a novel rendezvous-based hierarchical architecture for mutual localization. First, anisotropic convolutions are introduced into the network to learn multi-scale robot features, yielding an initial localization result. Then, an iterative refinement module with rendering adjusts the observed robot's pose. Finally, pose graph optimization is performed to globally refine all localization results by incorporating multi-frame observations. This yields a flexible architecture in which appropriate modules can be selected according to requirements. Simulations demonstrate that RHAML effectively solves multi-robot mutual localization, achieving translation errors below 2 cm and rotation errors below 0.5 degrees when the observed robot exhibits 5 m of depth variation. Its practical utility is further validated by applying it to map fusion during multi-robot exploration of unknown environments.
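To make the final stage of the pipeline concrete, the following is a minimal sketch of pose graph optimization over multi-frame relative observations. It is not the paper's implementation: it assumes a toy translation-only 2D graph with four poses and hypothetical relative measurements (including one loop-closure-like constraint), fixes the first pose at the origin, and solves the resulting linear least-squares problem so that the error in each measurement is distributed globally rather than accumulating along the chain.

```python
import numpy as np

# Hypothetical toy pose graph: nodes are 2D robot positions p_0..p_3,
# edges are noisy relative measurements z = p_j - p_i.
edges = [
    (0, 1, np.array([1.0, 0.0])),
    (1, 2, np.array([1.0, 0.1])),
    (2, 3, np.array([0.9, 0.0])),
    (0, 3, np.array([3.0, 0.0])),  # loop-closure-like constraint
]

n = 4  # number of poses; p_0 is fixed at the origin (gauge freedom)
A = np.zeros((2 * len(edges), 2 * (n - 1)))
b = np.zeros(2 * len(edges))
for k, (i, j, z) in enumerate(edges):
    # Each edge contributes the residual (p_j - p_i) - z.
    if i > 0:
        A[2 * k:2 * k + 2, 2 * (i - 1):2 * (i - 1) + 2] = -np.eye(2)
    if j > 0:
        A[2 * k:2 * k + 2, 2 * (j - 1):2 * (j - 1) + 2] = np.eye(2)
    b[2 * k:2 * k + 2] = z

# Globally consistent poses in the least-squares sense.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
poses = np.vstack([np.zeros(2), x.reshape(-1, 2)])
print(poses)
```

The solved final pose lands between the chained estimate (2.9, 0.1) and the direct measurement (3.0, 0.0), illustrating how multi-frame observations are reconciled. A full SE(3) version would parameterize rotations and iterate (e.g. Gauss-Newton), but the structure of the problem is the same.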