Event-based visual odometry is a specific branch of visual Simultaneous Localization and Mapping (SLAM) techniques, which aims at solving the tracking and mapping sub-problems in parallel by exploiting the special working principles of neuromorphic (i.e., event-based) cameras. Due to the motion-dependent nature of event data, explicit data association (i.e., feature matching under large-baseline viewpoint changes) can hardly be established, making direct methods a more rational choice. However, state-of-the-art direct methods are limited by the high computational complexity of the mapping sub-problem and the degeneracy of camera pose tracking in certain degrees of freedom (DoF) in rotation. In this paper, we resolve these issues by building an event-based stereo visual-inertial odometry system on top of our previous direct pipeline, Event-based Stereo Visual Odometry. Specifically, to speed up the mapping operation, we propose an efficient strategy for sampling contour points according to the local dynamics of events. The mapping performance is also improved in terms of structure completeness and local smoothness by merging the temporal-stereo and static-stereo results. To circumvent the degeneracy of camera pose tracking in recovering the pitch and yaw components of general six-DoF motion, we introduce IMU measurements as motion priors via pre-integration. To this end, a compact back-end is proposed for continuously updating the IMU bias and predicting the linear velocity, enabling an accurate motion prediction for camera pose tracking. The resulting system scales well with modern high-resolution event cameras and leads to better global positioning accuracy in large-scale outdoor environments. Extensive evaluations on five publicly available datasets featuring different resolutions and scenarios demonstrate the superior performance of the proposed system against five state-of-the-art methods.
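The abstract introduces IMU measurements as motion priors via pre-integration. As a minimal, hypothetical sketch (not the paper's actual implementation, and with simple Euler integration rather than the midpoint or RK4 schemes real systems use), the core idea is to accumulate bias-corrected gyroscope and accelerometer samples between two camera timestamps into relative rotation, velocity, and position increments that can seed camera pose tracking. All function and variable names below are illustrative.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector, so that skew(w) @ v == cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(phi):
    """SO(3) exponential map via Rodrigues' formula."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)  # first-order approximation near identity
    axis = phi / theta
    K = skew(axis)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt, bg, ba):
    """Accumulate relative rotation dR, velocity dv, and position dp between
    two camera timestamps from raw IMU samples, correcting the current
    gyro bias bg and accelerometer bias ba (gravity handling omitted)."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        a_c = a - ba                          # bias-corrected accel (body frame)
        dp += dv * dt + 0.5 * (dR @ a_c) * dt**2
        dv += (dR @ a_c) * dt
        dR = dR @ so3_exp((w - bg) * dt)      # bias-corrected gyro increment
    return dR, dv, dp
```

The back-end described in the abstract would then re-linearize these increments whenever the continuously updated IMU bias changes, and combine them with the predicted linear velocity to form the motion prior for the tracker.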