The long-standing aspiration of the Vision-and-Language Navigation (VLN) task is to develop an embodied agent with robust adaptability, capable of seamlessly transferring its navigation capabilities across various tasks. Despite remarkable advancements in recent years, most methods require dataset-specific training and therefore cannot generalize across diverse datasets encompassing distinct types of instructions. Large language models (LLMs) have demonstrated exceptional reasoning and generalization abilities, exhibiting immense potential in robot action planning. In this paper, we propose FlexVLN, an innovative hierarchical approach to VLN that integrates the fundamental navigation ability of a supervised-learning-based Instruction Follower with the robust generalization ability of an LLM Planner, enabling effective generalization across diverse VLN datasets. Moreover, we propose a verification mechanism and a multi-model integration mechanism to mitigate potential hallucinations by the LLM Planner and to enhance the execution accuracy of the Instruction Follower. We take REVERIE, SOON, and CVDN-target as out-of-domain datasets for assessing generalization ability. The generalization performance of FlexVLN substantially surpasses that of all previous methods.