This paper quantifies an error source that limits the accuracy of lidar scan matching, particularly for voxel-based methods. Lidar scan matching, which underpins dead reckoning (also known as lidar odometry) and mapping, computes the rotation and translation that best align a pair of point clouds. Perspective errors arise when a scene is viewed from different angles, since different surfaces become visible or occluded at each viewpoint. To explain perspective anomalies observed in data, this paper models perspective errors for two objects representative of urban landscapes: a cylindrical column and a dual-wall corner. For each object, we provide an analytical model of the perspective error for voxel-based lidar scan matching. We then analyze how perspective errors accumulate as a lidar-equipped vehicle moves past these objects.
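The perspective effect summarized above can be illustrated with a minimal geometric sketch (an illustration of the concept, not the paper's analytical model): for a cylindrical column, a lidar sees only the arc facing the sensor, so the centroid of the observed points shifts as the sensor moves. A static column therefore appears to translate between scans, which is exactly the kind of bias that corrupts scan matching. All radii, sensor positions, and sample counts below are arbitrary choices for illustration.

```python
import numpy as np

def visible_centroid(sensor_xy, radius=0.5, n=3600):
    """Centroid of the arc of a cylinder (cross-section at the origin)
    that is visible from a 2-D sensor position."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = radius * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    normals = pts / radius                     # outward surface normals
    to_sensor = sensor_xy - pts                # rays from surface to sensor
    # A surface point is visible when its normal faces the sensor.
    visible = (normals * to_sensor).sum(axis=1) > 0.0
    return pts[visible].mean(axis=0)

# Two viewpoints as a vehicle drives past the column.
c1 = visible_centroid(np.array([-5.0, -10.0]))
c2 = visible_centroid(np.array([+5.0, -10.0]))

# The column is static, yet its apparent centroid moves between scans:
# this apparent translation is the perspective error.
shift = c2 - c1
```

In this toy setup each centroid is pulled toward its own sensor position, so `shift` is nonzero even though the object never moved; the paper's contribution is an analytical characterization of this bias for voxel-based matching, rather than the sampled illustration shown here.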