Visual perception and navigation have emerged as major focus areas in embodied artificial intelligence. We consider image-goal navigation, in which an agent must navigate to a goal specified by an image, relying only on observations from an onboard camera. The task is particularly challenging because it demands robust scene understanding, goal-oriented planning, and long-horizon navigation. Most existing approaches learn recurrent navigation policies via online reinforcement learning; however, training such policies requires substantial computation and time, and their performance is unreliable over long horizons. In this work, we present a generative Transformer-based model that jointly models image goals, camera observations, and the robot's past actions to predict future actions. We leverage state-of-the-art perception models and navigation policies to learn robust goal-conditioned policies without real-time interaction with the environment. Our model captures and associates visual information across long time horizons, enabling effective navigation. NOTE: This work was submitted as part of a Master's Capstone Project and must be treated as such. This is still an early work in progress and not the final version.
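The joint modeling of goals, observations, and actions described above can be sketched as a causal Transformer over an interleaved token sequence, in the spirit of decision-transformer-style policies. The sketch below is illustrative only: the embedding dimension, random weights, and the assumption that a frozen perception model supplies per-step embeddings are placeholders, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16   # embedding dimension (hypothetical)
T = 4    # number of past timesteps in the context window

# Hypothetical pre-computed embeddings: one goal-image token, then
# per-step (observation, action) embeddings from a frozen perception model.
goal = rng.normal(size=(1, d))
obs = rng.normal(size=(T, d))
acts = rng.normal(size=(T, d))

# Interleave into one token sequence: [goal, o_1, a_1, ..., o_T, a_T]
seq = [goal[0]]
for o, a in zip(obs, acts):
    seq.extend([o, a])
x = np.stack(seq)  # shape (2T + 1, d)

# One causal self-attention layer (single head, random weights for illustration)
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d)
mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)  # hide future tokens
scores[mask] = -np.inf
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
out = attn @ v  # contextualized token representations, shape (2T + 1, d)

# In a full model, the representation at the latest observation token
# would be decoded into logits over discrete actions (e.g. turn/forward/stop).
print(out.shape)
```

Because of the causal mask, each token attends only to the goal and to earlier observation/action tokens, which is what lets the model condition future-action prediction on the goal image and the full interaction history.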