Accurate sensor-to-vehicle calibration is essential for safe autonomous driving: angular misalignment of a LiDAR sensor can lead to safety-critical failures during autonomous operation. However, current methods focus primarily on correcting sensor-to-sensor errors without addressing the miscalibration of individual sensors that causes these errors in the first place. We introduce FlowCalib, the first framework that detects LiDAR-to-vehicle miscalibration using motion cues from the scene flow of static objects. Our approach exploits the systematic bias that rotational misalignment induces in the flow field computed from sequential 3D point clouds, eliminating the need for additional sensors. The architecture integrates a neural scene flow prior for flow estimation with a dual-branch detection network that fuses learned global flow features with handcrafted geometric descriptors. The combined representation supports two complementary binary classification tasks: a global decision indicating whether any misalignment is present, and separate axis-specific decisions indicating whether each rotational axis is misaligned. Experiments on the nuScenes dataset demonstrate that FlowCalib robustly detects miscalibration, establishing a benchmark for sensor-to-vehicle miscalibration detection.
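The core observation above can be made concrete with a minimal numerical sketch (not FlowCalib itself): when both LiDAR scans are mapped into the vehicle frame through an extrinsic with a small rotational error, the ego-motion-compensated flow of static points acquires a systematic, nonzero bias. All names, the 1-degree yaw error, and the point distribution below are illustrative assumptions.

```python
import numpy as np

def rot_z(deg: float) -> np.ndarray:
    """Rotation about the vehicle z-axis (yaw)."""
    a = np.deg2rad(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

rng = np.random.default_rng(0)
pts = rng.uniform(-20.0, 20.0, size=(1000, 3))  # static points, vehicle frame
t = np.array([1.0, 0.0, 0.0])                   # ego translation between scans

R_err = rot_z(1.0)  # assumed 1-degree yaw miscalibration of the extrinsic

# Both scans pass through the same wrong extrinsic, so both pick up R_err.
scan0 = pts @ R_err.T
scan1 = (pts - t) @ R_err.T   # vehicle moved by t; static points shift by -t

# Ego-motion-compensated flow: static points should map onto themselves.
flow = (scan1 + t) - scan0
residual = np.linalg.norm(flow, axis=1)

# With a correct extrinsic the residual is exactly zero; with the yaw error
# it equals |(I - R_err) t| = 2*sin(0.5 deg) ~ 0.0175 m for every static
# point -- a systematic bias that a detector can pick up.
```

For pure translational ego motion the bias is constant across the scene; once the ego motion also contains rotation, the bias becomes range-dependent, which is what makes the flow field of static objects informative about which axis is misaligned.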
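The two complementary classification tasks described in the abstract can be sketched as a pair of lightweight heads on the fused feature vector. This is a hypothetical illustration of the output structure only; the feature dimension, layer sizes, and head design are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MisalignmentHeads(nn.Module):
    """Sketch: global 'misaligned?' logit plus per-axis (roll/pitch/yaw) logits.

    The input is assumed to be the fused representation of learned global
    flow features and handcrafted geometric descriptors.
    """

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.global_head = nn.Linear(feat_dim, 1)  # misaligned vs. calibrated
        self.axis_head = nn.Linear(feat_dim, 3)    # one logit per rotation axis

    def forward(self, fused: torch.Tensor):
        return self.global_head(fused), self.axis_head(fused)

heads = MisalignmentHeads()
fused = torch.randn(4, 256)          # batch of 4 fused feature vectors
g, a = heads(fused)

# Binary decisions via sigmoid thresholding (both trainable with BCE losses).
global_flag = torch.sigmoid(g) > 0.5   # shape (4, 1): misalignment present?
axis_flags = torch.sigmoid(a) > 0.5    # shape (4, 3): which axes are off?
```

Keeping the global and per-axis decisions as separate outputs lets the two tasks be supervised independently while sharing the same fused representation.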