Vision-Language Navigation in Continuous Environments (VLN-CE) presents a core challenge: grounding high-level linguistic instructions into precise, safe, and long-horizon spatial actions. Explicit topological maps have proven to be a vital means of providing robust spatial memory in such tasks. However, existing topological planning methods suffer from a "Granularity Rigidity" problem: they typically rely on fixed geometric thresholds to sample nodes, which fails to adapt to varying environmental complexity. This rigidity leads to a critical mismatch: the model tends to over-sample in simple areas, causing computational redundancy, while under-sampling in high-uncertainty regions, increasing collision risk and compromising precision. To address this, we propose DGNav, a framework for Dynamic Topological Navigation that introduces a context-aware mechanism to modulate map density and connectivity on the fly. Our approach comprises two core innovations: (1) a Scene-Aware Adaptive Strategy that dynamically modulates graph construction thresholds based on the dispersion of predicted waypoints, enabling "densification on demand" in challenging environments; (2) a Dynamic Graph Transformer that reconstructs graph connectivity by fusing visual, linguistic, and geometric cues into dynamic edge weights, enabling the agent to filter out topological noise and improve instruction adherence. Extensive experiments on the R2R-CE and RxR-CE benchmarks demonstrate that DGNav achieves superior navigation performance and strong generalization. Furthermore, ablation studies confirm that our framework achieves a favorable trade-off between navigation efficiency and safe exploration. The code is available at https://github.com/shannanshouyin/DGNav.
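To make the "densification on demand" idea concrete, the following is a minimal, illustrative sketch of a dispersion-driven adaptive threshold: when predicted waypoints scatter widely (a proxy for scene complexity or uncertainty), the node-sampling threshold shrinks so the graph densifies locally. The function name, the inverse-linear scaling, and all parameters (`base_threshold`, `alpha`, `min_threshold`) are assumptions for exposition, not DGNav's actual formulation.

```python
import math

def adaptive_threshold(waypoints, base_threshold=1.0, alpha=0.5, min_threshold=0.25):
    """Illustrative sketch (not DGNav's exact rule): map waypoint dispersion
    to a node-sampling threshold, so high-uncertainty scenes get denser graphs."""
    n = len(waypoints)
    # Centroid of the predicted 2D waypoints.
    cx = sum(p[0] for p in waypoints) / n
    cy = sum(p[1] for p in waypoints) / n
    # Dispersion = RMS distance of waypoints from their centroid.
    dispersion = math.sqrt(
        sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in waypoints) / n
    )
    # Higher dispersion -> smaller threshold -> denser node sampling,
    # clamped below by min_threshold to avoid degenerate graphs.
    return max(min_threshold, base_threshold / (1.0 + alpha * dispersion))
```

For tightly clustered waypoints the threshold stays near `base_threshold` (sparse sampling suffices); for widely scattered waypoints it drops toward `min_threshold`, triggering the denser sampling the abstract describes.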