Autonomous off-road navigation is required for applications in agriculture, construction, search and rescue, and defence. Traditional on-road autonomy methods struggle with dynamic terrain, leading to poor vehicle control off-road. Recent deep-learning models have combined perception sensors with kinesthetic feedback for navigation on such terrains. However, this approach suffers from out-of-domain uncertainty: factors such as changes in weather and time of day degrade model performance. We propose FuseIsPath, a multimodal fusion network that uses LWIR and RGB images to provide robustness against changing weather and lighting conditions. To aid further work in this domain, we also open-source a day-night dataset of LWIR and RGB images with pseudo-labels for traversability. To co-register the two images, we developed a novel method for targetless extrinsic calibration of the LWIR camera, LiDAR, and RGB camera, achieving a translation accuracy of 1.7 cm and a rotation accuracy of 0.827°.
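To illustrate why the extrinsic calibration matters for fusion, the sketch below projects shared LiDAR points into both cameras to obtain pixel correspondences between the LWIR and RGB images. All intrinsics and extrinsics here are illustrative placeholders, not the paper's calibrated values, and the projection model is a plain pinhole without distortion.

```python
import numpy as np

def project_points(points_xyz, K, R, t):
    """Project Nx3 LiDAR-frame points into a camera with intrinsics K and
    extrinsics (R, t); returns Nx2 pixel coordinates and per-point depth."""
    cam = points_xyz @ R.T + t          # LiDAR frame -> camera frame
    uv_h = cam @ K.T                    # pinhole projection (homogeneous)
    return uv_h[:, :2] / uv_h[:, 2:3], cam[:, 2]

# Placeholder intrinsics (fx, fy, cx, cy) for the two cameras.
K_rgb = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
K_lwir = np.array([[400., 0., 160.], [0., 400., 120.], [0., 0., 1.]])

# Placeholder extrinsics: RGB camera at the LiDAR origin, LWIR camera
# offset by 5 cm along the x-axis (purely hypothetical geometry).
R_id = np.eye(3)
t_rgb = np.zeros(3)
t_lwir = np.array([-0.05, 0., 0.])

pts = np.array([[0., 0., 2.], [0.5, 0., 4.]])   # points in the LiDAR frame
uv_rgb, _ = project_points(pts, K_rgb, R_id, t_rgb)
uv_lwir, _ = project_points(pts, K_lwir, R_id, t_lwir)
# uv_rgb[i] and uv_lwir[i] are corresponding pixels in the two images;
# such correspondences are what co-registration provides for per-pixel fusion.
```

The centimetre-level translation and sub-degree rotation accuracy reported in the abstract matter because any extrinsic error shifts these correspondences, misaligning the LWIR and RGB features that the fusion network combines.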