Event cameras, when combined with inertial sensors, show significant potential for motion estimation in challenging scenarios such as high-speed maneuvers and low-light environments. Many fusion methods exist, but most reduce to a synchronous, discrete-time fusion problem; the asynchronous nature of event cameras and their distinctive fusion mechanism with inertial sensors remain underexplored. In this paper, we introduce AsynEIO, a monocular event-inertial odometry method designed to fuse asynchronous event and inertial data within a unified Gaussian Process (GP) regression framework. Our approach incorporates an event-driven frontend that tracks feature trajectories directly from raw event streams at high temporal resolution. These tracked feature trajectories, together with various inertial factors, are integrated into the same GP regression framework to enable asynchronous fusion. By deriving analytical residual Jacobians and noise models, our method constructs a factor graph that is iteratively optimized and pruned by a sliding-window optimizer. Comparative assessments highlight the performance of different inertial fusion strategies, suggesting optimal choices for varying conditions. Experimental results on both public datasets and our own event-inertial sequences show that AsynEIO outperforms existing methods, especially in high-speed and low-illumination scenarios.
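To give intuition for why GP regression suits asynchronous fusion, the sketch below interpolates a continuous-time state from irregularly timed samples and queries it at arbitrary event timestamps. This is a minimal 1-D illustration with a standard RBF-kernel GP posterior mean; all timestamps, values, and hyperparameters are invented for illustration and do not come from the AsynEIO paper, which uses a motion-prior GP over poses rather than this toy kernel.

```python
import numpy as np

def rbf_kernel(t1, t2, length=0.1, var=1.0):
    """Squared-exponential kernel over timestamps (hypothetical hyperparameters)."""
    d = t1[:, None] - t2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Irregularly timed "inertial" samples of a 1-D state (toy data).
t_train = np.array([0.00, 0.03, 0.07, 0.12, 0.20])
y_train = np.sin(2 * np.pi * t_train)

# GP posterior mean at asynchronous event timestamps that fall
# between the training samples -- no resampling to a common clock.
sigma_n = 1e-3                                  # assumed measurement noise
K = rbf_kernel(t_train, t_train) + sigma_n * np.eye(len(t_train))
t_query = np.array([0.015, 0.095, 0.18])        # event times, off the sample grid
Ks = rbf_kernel(t_query, t_train)
mu = Ks @ np.linalg.solve(K, y_train)           # interpolated state at event times
```

The point of the sketch is the query step: once the state is modeled as a GP, any event timestamp can be evaluated directly, which is what lets event and inertial residuals share one continuous-time factor graph.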