In recent years, transformer-based architectures have become the de facto standard for sequence modeling in deep learning. Inspired by these successes, we propose a causal visual-inertial fusion transformer (VIFT) for pose estimation in deep visual-inertial odometry (VIO). This study aims to improve pose estimation accuracy by leveraging the attention mechanisms in transformers, which utilize historical data more effectively than the recurrent neural network (RNN) based approaches used in recent methods. Transformers typically require large-scale data for training. To address this issue, we utilize inductive biases for deep VIO networks. Since latent visual-inertial feature vectors encompass the essential information for pose estimation, we employ transformers to refine pose estimates by updating these latent vectors temporally. Our study also examines the impact of data imbalance and rotation learning methods in supervised end-to-end learning of visual-inertial odometry by employing specialized gradients in backpropagation for elements of the SE$(3)$ group. The proposed method is end-to-end trainable and requires only a monocular camera and an IMU during inference. Experimental results demonstrate that VIFT increases the accuracy of monocular VIO networks, achieving state-of-the-art results compared to previous methods on the KITTI dataset. The code will be made available at https://github.com/ybkurt/VIFT.