Vision-Language Navigation (VLN) requires embodied agents to interpret natural language instructions and navigate through complex continuous 3D environments. However, the dominant imitation learning paradigm suffers from exposure bias, where minor deviations during inference compound into large errors. While DAgger-style approaches attempt to mitigate this by correcting error states, we identify a critical limitation: Instruction-State Misalignment. Forcing an agent to learn recovery actions from off-track states often creates supervision signals that semantically conflict with the original instruction. To address these challenges, we introduce BudVLN, an online framework that learns from on-policy rollouts by constructing supervision matched to the current state distribution. BudVLN performs retrospective rectification via counterfactual re-anchoring and decision-conditioned supervision synthesis, using a geodesic oracle to synthesize corrective trajectories that originate from valid historical states, thereby ensuring semantic consistency. Experiments on the standard R2R-CE and RxR-CE benchmarks demonstrate that BudVLN consistently mitigates distribution shift and achieves state-of-the-art performance in both Success Rate (SR) and Success weighted by Path Length (SPL).