This article presents a complete semantic scene understanding workflow using only a single 2D lidar. It fills the gap in 2D lidar semantic segmentation, enabling existing 2D lidar-based algorithms to be rethought and enhanced for a variety of mobile robot tasks. We introduce the first publicly available 2D lidar semantic segmentation dataset and the first fine-grained semantic segmentation algorithm designed specifically for 2D lidar sensors on autonomous mobile robots. To annotate this dataset, we propose a novel semi-automatic semantic labeling framework that requires minimal human effort and provides point-level semantic annotations. The data were collected by three different types of 2D lidar sensors across twelve indoor environments featuring a range of common indoor objects. Furthermore, the proposed semantic segmentation algorithm fully exploits raw lidar information -- position, range, intensity, and incident angle -- to deliver stochastic, point-wise semantic segmentation. We present a series of semantic occupancy grid mapping experiments and demonstrate two semantically-aware navigation control policies based on 2D lidar. These results show that the proposed semantic 2D lidar dataset, semi-automatic labeling framework, and segmentation algorithm are effective and can enhance different components of the robotic navigation pipeline. Multimedia resources are available at: https://youtu.be/P1Hsvj6WUSY.