Accurate and robust navigation in unstructured environments requires fusing data from multiple sensors. Such fusion gives the robot better awareness of its surroundings, including areas of the environment that are not currently visible but were observed at an earlier time. To address this, we propose a method for traversability prediction in challenging outdoor environments that uses a sequence of RGB and depth images fused with pose estimations. Our method, termed WayFASTER (Waypoints-Free Autonomous System for Traversability with Enhanced Robustness), uses experience data recorded from a receding horizon estimator to train a self-supervised neural network for traversability prediction, eliminating the need for hand-crafted heuristics. Our experiments demonstrate that the method excels at avoiding obstacles and correctly identifies that terrains such as tall grass are navigable. By using a sequence of images, WayFASTER significantly enhances the robot's awareness of its surroundings, enabling it to predict the traversability of terrains that are not immediately visible. This enhanced awareness contributes to better navigation performance in environments where such predictive capability is essential.
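To make the pipeline described above concrete, the sketch below shows one possible way a traversability predictor could consume a short sequence of RGB-D frames together with relative pose estimates and output a top-down traversability map. This is a minimal illustration, not the authors' implementation: the network sizes, the simple concatenation-based pose fusion, and all names (`TraversabilityNet`, `pose_mlp`, etc.) are assumptions made for the example.

```python
# Minimal sketch (assumed architecture, not the paper's model) of a
# traversability predictor over a sequence of RGB-D frames plus poses.
import torch
import torch.nn as nn

class TraversabilityNet(nn.Module):
    def __init__(self, seq_len=4, map_size=64):
        super().__init__()
        # Per-frame encoder over 4-channel RGB-D input.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Pose (x, y, yaw) of each frame relative to the current frame.
        self.pose_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU())
        # Decoder produces a map_size x map_size traversability grid in [0, 1].
        self.decoder = nn.Sequential(
            nn.Linear(seq_len * (64 * 8 * 8 + 64), 1024), nn.ReLU(),
            nn.Linear(1024, map_size * map_size), nn.Sigmoid(),
        )
        self.map_size = map_size

    def forward(self, frames, poses):
        # frames: (B, T, 4, H, W) RGB-D sequence; poses: (B, T, 3) relative poses.
        B, T = frames.shape[:2]
        feats = []
        for t in range(T):
            img_feat = self.encoder(frames[:, t]).flatten(1)   # (B, 64*8*8)
            pose_feat = self.pose_mlp(poses[:, t])             # (B, 64)
            feats.append(torch.cat([img_feat, pose_feat], dim=1))
        fused = torch.cat(feats, dim=1)                        # fuse the sequence
        trav = self.decoder(fused).view(B, 1, self.map_size, self.map_size)
        return trav  # per-cell traversability score in [0, 1]

# Self-supervised training would regress these per-cell scores against
# traversability labels derived from the robot's own driving experience
# (e.g., tracked motion from a receding horizon estimator), so no
# hand-crafted terrain heuristics are needed.
model = TraversabilityNet()
frames = torch.randn(2, 4, 4, 120, 160)  # batch of 2, sequence of 4 RGB-D frames
poses = torch.randn(2, 4, 3)
print(model(frames, poses).shape)        # torch.Size([2, 1, 64, 64])
```

Because past frames are carried in the input sequence along with their relative poses, the predicted map can cover terrain that the current frame no longer sees, which is the mechanism behind the enhanced awareness claimed above.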