The bioinspired event camera, distinguished by its exceptional temporal resolution, high dynamic range, and low power consumption, has been extensively studied in recent years for motion estimation, robotic perception, and object detection. In ego-motion estimation, the visual-inertial setup is commonly adopted due to the complementary characteristics of the two sensors (e.g., scale perception and low drift). Optimal event-based visual-inertial fusion requires accurate spatiotemporal (extrinsic and temporal) calibration. In this work, we present eKalibr-Inertial, an accurate spatiotemporal calibrator for event-based visual-inertial systems that utilizes the widely used circle grid board. Building upon the grid pattern recognition and tracking methods in eKalibr and eKalibr-Stereo, the proposed method starts with a rigorous and efficient initialization, in which all parameters in the estimator are accurately recovered. Subsequently, a continuous-time batch optimization is conducted to refine the initialized parameters toward optimal states. The results of extensive real-world experiments show that eKalibr-Inertial achieves accurate event-based visual-inertial spatiotemporal calibration. The implementation of eKalibr-Inertial is open-sourced at https://github.com/Unsigned-Long/eKalibr to benefit the research community.