Accurate LiDAR-camera calibration is crucial for multi-sensor systems. However, traditional methods often rely on physical targets, which are impractical for real-world deployment. Moreover, even carefully calibrated extrinsics can degrade over time due to sensor drift or external disturbances, necessitating periodic recalibration. To address these challenges, we present a Targetless LiDAR-Camera Calibration framework (TLC-Calib) that jointly optimizes sensor poses with a neural Gaussian-based scene representation. Reliable LiDAR points are frozen as anchor Gaussians to preserve global structure, while auxiliary Gaussians prevent local overfitting under noisy initialization. Our fully differentiable pipeline with photometric and geometric regularization achieves robust and generalizable calibration, consistently outperforming existing targetless methods on the KITTI-360, Waymo, and Fast-LIVO2 datasets. In addition, it yields more consistent Novel View Synthesis results, reflecting improved extrinsic alignment. The project page is available at: https://www.haebeom.com/tlc-calib-site/.