In this letter, we introduce GroundLoc, a LiDAR-only localization pipeline designed to localize a mobile robot in large-scale outdoor environments using prior maps. GroundLoc projects the perceived ground area into a Bird's-Eye View (BEV) image and uses either the keypoint detection network R2D2 or, alternatively, the non-learning Scale-Invariant Feature Transform (SIFT) to detect and select keypoints for BEV image map registration. Our results demonstrate that GroundLoc outperforms state-of-the-art methods on the SemanticKITTI and HeLiPR datasets across various sensors. In the multi-session localization evaluation, GroundLoc reaches an Average Trajectory Error (ATE) well below 50 cm on all Ouster OS2 128 sequences while meeting online runtime requirements. The system supports various sensor models, as evidenced by evaluations conducted with Velodyne HDL-64E, Ouster OS2 128, Aeva Aeries II, and Livox Avia sensors. The prior maps are stored as 2D raster image maps, which can be created from a single drive and require only 4 MB of storage per square kilometer. The source code is available at https://github.com/dcmlr/groundloc.