Sensor setups of robotic platforms commonly include both camera and LiDAR as they provide complementary information. However, fusing these two modalities typically requires highly accurate calibration between them. In this paper, we propose MDPCalib, a novel method for camera-LiDAR calibration that requires neither human supervision nor any specific target objects. Instead, we utilize sensor motion estimates from visual and LiDAR odometry as well as deep learning-based 2D-pixel-to-3D-point correspondences that are obtained without in-domain retraining. We represent camera-LiDAR calibration as an optimization problem and minimize the costs induced by constraints from sensor motion and point correspondences. In extensive experiments, we demonstrate that our approach yields highly accurate extrinsic calibration parameters and is robust to random initialization. Additionally, our approach generalizes to a wide range of sensor setups, which we demonstrate by employing it on various robotic platforms including a self-driving perception car, a quadruped robot, and a UAV. To make our calibration method publicly accessible, we release the code on our project website at http://calibration.cs.uni-freiburg.de.
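To illustrate one ingredient of such a formulation, the sketch below shows how an extrinsic camera-LiDAR transform could be recovered by minimizing the reprojection error of 2D-pixel-to-3D-point correspondences with a nonlinear least-squares solver. This is a hypothetical, simplified example on synthetic data; it covers only a correspondence-style cost term and omits the sensor-motion constraints and the specific optimization used in MDPCalib. The intrinsics `K` and all numbers are assumed for illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Assumed pinhole intrinsics (illustrative values, not from the paper).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(params, pts_lidar):
    """Transform LiDAR points by the candidate extrinsic and project to pixels.

    params: 6-vector = rotation vector (3) + translation (3).
    """
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    pts_cam = pts_lidar @ R.T + params[3:]
    uv = pts_cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def residuals(params, pts_lidar, pixels):
    # Per-correspondence reprojection error, flattened for the solver.
    return (project(params, pts_lidar) - pixels).ravel()

# Synthetic setup: a ground-truth extrinsic and random 3D points in front
# of the camera, with pixel observations generated from the ground truth.
rng = np.random.default_rng(0)
gt = np.array([0.05, -0.02, 0.10, 0.20, -0.10, 0.05])
pts = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 6.0], size=(50, 3))
pixels = project(gt, pts)

# Minimize the correspondence cost starting from the identity transform.
est = least_squares(residuals, x0=np.zeros(6), args=(pts, pixels)).x
```

In the full method, costs of this kind would be combined with constraints derived from visual and LiDAR odometry, so that the optimization is also anchored by the relative sensor motion rather than by point correspondences alone.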