In this paper, we propose a fast extrinsic calibration method for fusing multiple inertial measurement units (MIMU) to improve the localization accuracy of visual-inertial odometry (VIO). Current data fusion algorithms for MIMU depend heavily on the number of inertial sensors. Under the assumption that the extrinsic parameters between inertial sensors are perfectly calibrated, a fusion algorithm achieves better localization accuracy with more IMUs, while neglecting the effect of extrinsic calibration error. Our method formulates two non-linear least-squares problems to estimate the relative positions and orientations of the MIMU separately, without relying on external sensors or online estimation of inertial noise. We then give the general form of the virtual IMU (VIMU) method and propose its propagation on the manifold. We evaluate our method on public datasets, on our self-made sensor board, and on boards with different IMUs, validating its superiority over competing methods in speed, accuracy, and robustness. In simulation, we show that fusing only two IMUs calibrated with our method for motion prediction can rival fusing nine IMUs. Real-world experiments demonstrate better localization accuracy of a VIO system that integrates our calibration method and VIMU propagation on the manifold.