Depth perception is considered an invaluable source of information in the context of 3D mapping and various robotics applications. However, point cloud maps acquired with consumer-level light detection and ranging sensors (lidars) still suffer from bias related to measurement conditions and local surface properties, such as the beam-to-surface incidence angle, distance, texture, reflectance, or illumination. This fact has recently motivated researchers to exploit traditional filters, as well as the deep learning paradigm, to suppress these depth sensor errors while preserving geometric detail and map consistency. Despite these efforts, depth correction of lidar measurements remains an open challenge, mainly due to the lack of clean 3D data that could serve as ground truth. In this paper, we introduce two novel point cloud map consistency losses that enable self-supervised training of lidar depth correction models on real data. Specifically, the models exploit multiple point cloud measurements of the same scene from different viewpoints to learn to reduce the bias based on the constructed map consistency signal. Complementary to removing the bias from the measurements, we demonstrate that the depth correction models help to reduce localization drift. Additionally, we release a dataset containing point cloud data captured in an indoor corridor environment with precise localization and ground truth mapping information.
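One common way to turn multi-view agreement into a training signal is a plane-variance loss: after registering scans into a common frame, locally flat surfaces should form thin patches, and residual sensor bias inflates their thickness. The sketch below (NumPy, brute-force nearest neighbours) illustrates this idea only; the exact loss formulations proposed in the paper may differ.

```python
import numpy as np

def map_consistency_loss(clouds, k=8):
    """Illustrative plane-variance map-consistency loss.

    clouds: list of (N_i, 3) arrays, already registered into a common frame.
    For each point of the merged map, fit a local plane to its k nearest
    neighbours via the patch covariance; the smallest eigenvalue measures
    the patch "thickness" orthogonal to the plane. Biased depth readings
    thicken such patches, so minimising the mean smallest eigenvalue
    rewards mutually consistent scans.
    """
    pts = np.vstack(clouds)                                   # merged map, (N, 3)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]                   # k nearest neighbours (skip self)
    loss = 0.0
    for i in range(len(pts)):
        cov = np.cov(pts[nn[i]].T)                            # 3x3 covariance of the patch
        loss += np.linalg.eigvalsh(cov)[0]                    # smallest eigenvalue = thickness
    return loss / len(pts)
```

In a learning setting, the correction model would perturb the raw depths before the clouds are merged, and this scalar would be minimised by gradient descent (with a differentiable neighbour search or precomputed correspondences); the brute-force O(N²) neighbour search here is only for clarity.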