Vision-and-Language Navigation (VLN) tasks require an agent to follow textual instructions to navigate through 3D environments. Traditional approaches use supervised learning methods, relying heavily on domain-specific datasets to train VLN models. Recent methods try to utilize closed-source large language models (LLMs) like GPT-4 to solve VLN tasks in a zero-shot manner, but face challenges related to expensive token costs and potential data breaches in real-world applications. In this work, we introduce Open-Nav, a novel study that explores open-source LLMs for zero-shot VLN in continuous environments. Open-Nav employs a spatial-temporal chain-of-thought (CoT) reasoning approach to break down the task into instruction comprehension, progress estimation, and decision-making. It enhances scene perception with fine-grained object and spatial knowledge to improve the LLM's reasoning in navigation. Our extensive experiments in both simulated and real-world environments demonstrate that Open-Nav achieves competitive performance compared to using closed-source LLMs.
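To make the three-stage decomposition concrete, here is a minimal sketch of what a spatial-temporal CoT decision step could look like. All function names, prompt wording, and the fallback logic are illustrative assumptions, not the paper's actual implementation; `llm` stands in for any open-source LLM callable.

```python
# Illustrative sketch of a spatial-temporal CoT navigation step.
# Names and prompt structure are hypothetical, not from the paper.

def build_cot_prompt(instruction, history, scene_description):
    """Compose a prompt that decomposes navigation into the three
    stages named in the abstract: comprehension, progress, decision."""
    return (
        f"Instruction: {instruction}\n"
        f"Visited so far: {history}\n"
        f"Current scene (objects and spatial relations): {scene_description}\n"
        "Step 1 - Comprehension: restate what the instruction asks.\n"
        "Step 2 - Progress: estimate which part is already completed.\n"
        "Step 3 - Decision: choose the next waypoint from the candidates."
    )

def navigate_step(llm, instruction, history, scene_description, candidates):
    """One decision step: query the LLM and parse its chosen waypoint."""
    prompt = build_cot_prompt(instruction, history, scene_description)
    prompt += "\nCandidates: " + ", ".join(candidates)
    reply = llm(prompt)
    # Naive parse: pick the first candidate mentioned in the reply;
    # fall back to the first candidate if none is named.
    for candidate in candidates:
        if candidate in reply:
            return candidate
    return candidates[0]
```

The loop would repeat this step, appending each chosen waypoint to `history`, until the LLM signals that the instruction is complete.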