We propose a real-time dynamic LiDAR odometry pipeline for mobile robots in Urban Search and Rescue (USAR) scenarios. Existing approaches to dynamic object detection often rely on pretrained neural networks or computationally expensive volumetric maps. To improve efficiency on computationally limited robots, we reuse data between the odometry and detection modules. Using a range image segmentation technique and a novel residual-based heuristic, our method distinguishes dynamic from static objects before integrating them into the point cloud map. The approach demonstrates robust object tracking and improved map accuracy in environments with numerous dynamic objects. Even highly non-rigid objects, such as running humans, are accurately detected at the point level without prior downsampling of the point cloud and, hence, without loss of information. Evaluation on simulated and real-world data validates its computational efficiency. Compared to a state-of-the-art volumetric method, our approach achieves comparable detection performance at a fraction of the processing time, adding only 14 ms to the odometry module for dynamic object detection and tracking. The implementation and a new real-world dataset are released as open source for further research.