Localization and mapping are crucial tasks for any robot operating in unstructured environments. Time-of-flight (ToF) sensors (e.g.,~lidar) have proven useful in mobile robotics, where high-resolution sensors can be used for simultaneous localization and mapping. In soft and continuum robotics, however, these high-resolution sensors are too large for practical use. This, combined with the deformable nature of such robots, has left continuum robot (CR) localization and mapping in unstructured environments a largely untouched area. In this work, we present a localization technique for CRs that relies on small, low-resolution ToF sensors distributed along the length of the robot. By fusing measurement information with a robot shape prior, we show that accurate localization is possible despite each sensor frequently encountering degenerate scenarios. We achieve an average localization error of 2.5 cm in position and 7.2° in rotation across all experimental conditions with a 53 cm long robot. We demonstrate that these results are repeatable across multiple environments, in both simulation and real-world experiments, and study the robustness of the estimate to deviations in the prior map.