An elusive goal in navigation research is to build an intelligent agent that can understand multimodal instructions, including natural language and images, and perform useful navigation. To this end, we study a widely useful category of navigation tasks we call Multimodal Instruction Navigation with demonstration Tours (MINT), in which the environment prior is provided through a previously recorded demonstration video. Recent advances in Vision-Language Models (VLMs) offer a promising path toward this goal, as they demonstrate strong capabilities in perceiving and reasoning about multimodal inputs. However, VLMs are typically trained to predict textual output, and how best to utilize them for navigation remains an open research question. To solve MINT, we present Mobility VLA, a hierarchical Vision-Language-Action (VLA) navigation policy that combines the environment understanding and commonsense reasoning power of long-context VLMs with a robust low-level navigation policy based on topological graphs. The high-level policy consists of a long-context VLM that takes the demonstration tour video and the multimodal user instruction as input and finds the goal frame in the tour video. Next, a low-level policy uses the goal frame and an offline-constructed topological graph to generate robot actions at every timestep. We evaluated Mobility VLA in an 836 m^2 real-world environment and show that it achieves high end-to-end success rates on previously unsolved multimodal instructions, such as "Where should I return this?" while holding a plastic bin. A video demonstrating Mobility VLA can be found here: https://youtu.be/-Tof__Q8_5s
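To make the two-level structure concrete, the following is a minimal Python sketch of the control loop the abstract describes. It is illustrative only: `vlm.find_goal_frame`, `robot.localize`, and `robot.step_toward` are hypothetical interfaces standing in for the learned components, and the planner over the offline-built topological graph is filled in here with plain breadth-first search, which the paper does not specify.

```python
from collections import deque

def shortest_path(adjacency: dict[int, list[int]], start: int, goal: int) -> list[int]:
    """BFS over the topological graph (one node per tour frame, built offline)."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            # Reconstruct the node sequence from start to goal.
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbor in adjacency.get(node, []):
            if neighbor not in parents:
                parents[neighbor] = node
                queue.append(neighbor)
    return []  # goal unreachable from the start node

def navigate(robot, vlm, tour_frames, adjacency, instruction_text, instruction_image=None):
    # High-level policy (runs once per instruction): the long-context VLM reads the
    # entire demonstration tour plus the multimodal instruction and picks a goal frame.
    goal = vlm.find_goal_frame(tour_frames, instruction_text, instruction_image)
    # Low-level policy (runs every timestep): localize on the graph, plan a path
    # to the goal frame, and step toward the next waypoint.
    while True:
        node = robot.localize()
        if node == goal:
            return  # goal frame reached
        path = shortest_path(adjacency, node, goal)
        if len(path) < 2:
            return  # no route to the goal on the graph
        robot.step_toward(path[1])
```

Note the division of labor this sketch reflects: the expensive VLM call happens once per instruction, while the per-timestep loop relies only on cheap graph operations, which is what makes the low-level policy robust and fast.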