This paper presents a fully unsupervised deep change detection approach for mobile robots equipped with 3D LiDAR. In unstructured environments, it is infeasible to define a closed set of semantic classes; instead, semantic segmentation is reformulated as binary change detection. We develop a neural network, RangeNetCD, that uses an existing point-cloud map and a live LiDAR scan to detect scene changes with respect to the map. Using a novel loss function, existing point-cloud semantic segmentation networks can be trained to perform change detection without any labels or assumptions about local semantics. We demonstrate the performance of this approach on data from challenging terrains; mean intersection-over-union (mIoU) scores range between 67.4% and 82.2% depending on the amount of environmental structure, outperforming the geometric baseline used in all experiments. The neural network runs at faster than 10 Hz and is integrated into a robot's autonomy stack to allow safe navigation around obstacles that intersect the planned path. In addition, we describe a novel method for the rapid automated acquisition of per-point ground-truth labels: covering changed parts of the scene with retroreflective materials and applying a threshold filter to the intensity channel of the LiDAR allows for quantitative evaluation of the change detector.
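The intensity-based labeling idea can be sketched as a simple per-point threshold: retroreflective surfaces return much higher intensity than natural terrain, so points above a cutoff are marked as changed. A minimal sketch follows; the function name, the normalized-intensity convention, and the threshold value of 0.9 are illustrative assumptions, not details from the paper.

```python
import numpy as np

def label_changes_by_intensity(intensity: np.ndarray,
                               threshold: float = 0.9) -> np.ndarray:
    """Per-point ground-truth change labels from LiDAR intensity.

    Assumes intensity is normalized to [0, 1]; the 0.9 cutoff is a
    hypothetical value chosen so that only retroreflective returns
    exceed it. Returns a boolean mask: True = changed point.
    """
    return intensity > threshold

# Synthetic example: five returns, two of them retroreflective.
intensity = np.array([0.10, 0.95, 0.30, 0.99, 0.20])
labels = label_changes_by_intensity(intensity)
```

In practice the cutoff would be calibrated per sensor, since raw LiDAR intensity scales vary by hardware and range.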