We present a novel target-based lidar-camera extrinsic calibration methodology for sensors with non-overlapping fields of view (FOV). Contrary to previous work, our methodology overcomes the non-overlapping FOV challenge by using a motion capture system (MCS) instead of traditional simultaneous localization and mapping approaches. Owing to the high relative precision of MCSs, our methodology achieves both the high accuracy and the repeatability common to traditional target-based methods, regardless of the amount of overlap in the sensors' fields of view. Furthermore, we design a target-agnostic implementation that does not require uniquely identifiable features, using an iterative closest point (ICP) approach enabled by the MCS measurements. We show in simulation that we can accurately recover the extrinsic calibration over the range of perturbations to the true calibration expected in real circumstances. We demonstrate experimentally that our method outperforms state-of-the-art lidar-camera extrinsic calibration methods applicable to non-overlapping FOV systems, while using a target-based approach that guarantees repeatable, high-accuracy results. Lastly, we show in simulation that different target designs can be used, including easily constructed 3D targets such as a cylinder, which are normally considered degenerate in most calibration formulations.
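To illustrate the iterative closest point step mentioned above, here is a minimal sketch of generic point-to-point ICP with SVD-based (Kabsch) alignment. This is a textbook variant for intuition only, not the paper's MCS-informed, target-agnostic implementation; function names and the brute-force nearest-neighbour search are our own assumptions.

```python
import numpy as np

def best_fit_transform(A, B):
    """Kabsch: rigid (R, t) mapping points A onto B, correspondences known."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(src, dst, iters=50, tol=1e-8):
    """Point-to-point ICP: alternate nearest-neighbour matching and Kabsch."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur, prev_err = src.copy(), np.inf
    for _ in range(iters):
        # Brute-force nearest neighbour in dst for each current point.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        R, t = best_fit_transform(cur, dst[idx])
        cur = cur @ R.T + t                 # apply the incremental update
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
        err = np.sqrt(d2[np.arange(len(cur)), idx]).mean()
        if abs(prev_err - err) < tol:       # converged
            break
        prev_err = err
    return R_tot, t_tot
```

Like all local ICP variants, this sketch needs a reasonable initial alignment to converge; in the paper's setting that role is played by the MCS measurements, which is what makes the target-agnostic formulation workable.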