The increasing demand for autonomous vehicles has created a need for robust navigation systems that can operate effectively in adverse weather conditions. Visual odometry is a technique used in these navigation systems, enabling the estimation of vehicle position and motion from onboard camera input. However, visual odometry accuracy can degrade significantly in challenging weather conditions such as heavy rain, snow, or fog. In this paper, we evaluate a range of visual odometry methods, including our DROID-SLAM based heuristic approach. Specifically, these algorithms are tested on both clear and rainy weather urban driving data to evaluate their robustness. We compiled a dataset comprising a range of rainy weather conditions from different cities, including the Oxford RobotCar dataset from Oxford, the 4Seasons dataset from Munich, and an internal dataset collected in Singapore. We evaluated the visual odometry algorithms for both monocular and stereo camera setups using the Absolute Trajectory Error (ATE). Among the approaches evaluated, our findings suggest that the Depth and Flow for Visual Odometry (DF-VO) algorithm with a monocular setup performed best over short distances (< 500 m), while our proposed DROID-SLAM based heuristic approach with a stereo setup performed relatively well for long-term localization. Results from both VO algorithms suggest the need for a more robust sensor-fusion-based approach to localization in rain.
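For readers unfamiliar with the metric, ATE is commonly reported as the RMSE of per-pose translational error after the estimated trajectory has been associated and rigidly aligned with ground truth. The following is a minimal illustrative sketch of that computation, assuming the trajectories are already timestamp-associated and aligned (e.g., via a Umeyama fit); it is not the paper's evaluation code.

```python
import numpy as np

def ate_rmse(gt: np.ndarray, est: np.ndarray) -> float:
    """RMSE Absolute Trajectory Error between two (N, 3) position arrays.

    Assumes gt and est are already timestamp-associated and rigidly
    aligned; this is an illustrative sketch, not the paper's pipeline.
    """
    errors = np.linalg.norm(gt - est, axis=1)  # per-pose translation error (m)
    return float(np.sqrt(np.mean(errors ** 2)))

# Toy example: a constant 1 m lateral offset yields an ATE of 1.0 m.
gt = np.zeros((5, 3))
est = gt + np.array([1.0, 0.0, 0.0])
print(ate_rmse(gt, est))  # 1.0
```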