In autonomous systems, sensor calibration is essential for safe and efficient navigation in dynamic environments. Accurate calibration is a prerequisite for reliable perception and planning tasks such as object detection and obstacle avoidance. Many existing LiDAR calibration methods require overlapping fields of view, while others rely on external sensing devices or assume a feature-rich environment. In addition, the vast majority of calibration algorithms do not support Sensor-to-Vehicle calibration. In this work, we propose CaLiV, a novel target-based technique for extrinsic Sensor-to-Sensor and Sensor-to-Vehicle calibration of multi-LiDAR systems. The algorithm handles non-overlapping fields of view and does not require any external sensing devices. First, we apply motion to produce field-of-view overlaps and use a simple Unscented Kalman Filter to obtain vehicle poses. Then, we use the Gaussian mixture model-based registration framework GMMCalib to align the point clouds in a common calibration frame. Finally, we reduce the task of recovering the sensor extrinsics to a minimization problem. We show that our method accurately resolves both translational and rotational Sensor-to-Sensor errors. In addition, all Sensor-to-Vehicle rotation angles can be calibrated with high accuracy. We validate the simulation results in real-world experiments. The code is open-source and available at https://github.com/TUMFTM/CaLiV.
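The abstract reduces extrinsic recovery to a minimization problem over aligned point clouds. As a minimal illustration of that idea (not the authors' GMM-based formulation), the sketch below solves the underlying least-squares rigid-alignment subproblem in closed form via the Kabsch algorithm: given corresponding points observed in a common calibration frame, it finds the rotation `R` and translation `t` minimizing the sum of squared residuals. All function names here are illustrative, not part of CaLiV.

```python
import numpy as np

def recover_extrinsics(src, dst):
    """Least-squares rigid alignment (Kabsch/Procrustes): find R, t
    minimizing sum ||R @ src_i + t - dst_i||^2 over corresponding points.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns (R, t) with R a proper rotation (det(R) = +1).
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: recover a known 10-degree yaw and a small translation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 0.1])
R_est, t_est = recover_extrinsics(pts, pts @ R_true.T + t_true)
```

With noise-free correspondences the closed-form solution is exact; in practice the correspondence problem itself is the hard part, which is why CaLiV first establishes a common calibration frame via GMM-based registration before any such minimization.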