Multi-sensor fusion is essential for autonomous vehicle localization, as it integrates data from multiple sources to improve accuracy and reliability. The accuracy of the fused position and orientation depends on how precisely measurement uncertainty is modeled. Traditional uncertainty-modeling methods typically assume a Gaussian distribution and rely on manual, heuristic parameter tuning; as a result, they scale poorly and struggle with long-tail scenarios. To address these challenges, we propose a learning-based method that encodes sensor information with high-order neural network features, eliminating the need for explicit uncertainty estimation. By building an end-to-end neural network designed specifically for multi-sensor fusion, the method also largely removes the need for manual parameter tuning. Experiments in real-world autonomous driving scenarios demonstrate the effectiveness of our approach: the proposed method outperforms existing multi-sensor fusion methods in both accuracy and robustness. A video of the results is available at https://youtu.be/q4iuobMbjME.
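To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of an end-to-end fusion network in the spirit described above: each sensor's measurement is encoded into a learned feature vector and the concatenated features regress a fused pose directly, with no explicit Gaussian covariance model or hand-tuned noise parameters. The sensor set (GNSS, LiDAR odometry, IMU), all module names, and all dimensions are illustrative assumptions.

```python
# Hypothetical sketch of learned multi-sensor fusion; not the paper's code.
import torch
import torch.nn as nn


class SensorEncoder(nn.Module):
    """Encodes one sensor's raw measurement into a shared feature space."""

    def __init__(self, in_dim: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class FusionNet(nn.Module):
    """Fuses per-sensor features and regresses a pose (x, y, z, yaw)."""

    def __init__(self, sensor_dims: dict, feat_dim: int = 64):
        super().__init__()
        # One encoder per sensor; the network learns how much to trust each
        # sensor implicitly, replacing hand-tuned uncertainty parameters.
        self.encoders = nn.ModuleDict(
            {name: SensorEncoder(d, feat_dim) for name, d in sensor_dims.items()}
        )
        self.head = nn.Sequential(
            nn.Linear(feat_dim * len(sensor_dims), 128), nn.ReLU(),
            nn.Linear(128, 4),  # fused pose: x, y, z, yaw
        )

    def forward(self, obs: dict) -> torch.Tensor:
        feats = [self.encoders[name](obs[name]) for name in self.encoders]
        return self.head(torch.cat(feats, dim=-1))


# Usage: one training step on a dummy batch of sensor observations.
model = FusionNet({"gnss": 3, "lidar_odom": 6, "imu": 6})
batch = {
    "gnss": torch.randn(8, 3),
    "lidar_odom": torch.randn(8, 6),
    "imu": torch.randn(8, 6),
}
pred = model(batch)                                      # (8, 4) fused poses
loss = nn.functional.mse_loss(pred, torch.zeros(8, 4))   # dummy target poses
loss.backward()
```

In this sketch the whole pipeline is trained end to end against ground-truth poses, so sensor weighting emerges from the learned features rather than from a manually specified Gaussian noise model.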