Text-goal instance navigation (TGIN) asks an agent to resolve a single free-form description into actions that reach the correct object instance among same-category distractors. We present \textit{Context-Nav}, which elevates long, contextual captions from a local matching cue to a global exploration prior and verifies candidates through 3D spatial reasoning. First, we compute dense text-image alignments to build a value map that ranks frontiers -- guiding exploration toward regions consistent with the entire description rather than with early detections. Second, upon observing a candidate, we perform a viewpoint-aware relation check: the agent samples plausible observer poses, aligns local frames, and accepts a target only if the described spatial relations can be satisfied from at least one viewpoint. The pipeline requires no task-specific training or fine-tuning, yet attains state-of-the-art performance on InstanceNav and CoIN-Bench. Ablations show that (i) encoding full captions into the value map avoids wasted motion and (ii) explicit, viewpoint-aware 3D verification prevents semantically plausible but incorrect stops. This suggests that geometry-grounded spatial reasoning is a scalable alternative to heavy policy training or human-in-the-loop interaction for fine-grained instance disambiguation in cluttered 3D scenes.
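The two stages above can be illustrated with a minimal sketch. All names here (\texttt{frontier\_value\_map}, \texttt{viewpoint\_aware\_check}, the circular pose-sampling scheme, and the cosine-similarity scoring) are illustrative assumptions, not the paper's actual implementation; the sketch only shows the shape of the idea: rank frontiers by full-caption alignment, then accept a candidate only if a stated spatial relation holds from at least one sampled observer pose.

```python
import numpy as np

def frontier_value_map(caption_emb, frontier_embs):
    # Hypothetical scoring: cosine similarity between the embedding of the
    # *entire* caption and each frontier region's image embedding.
    # Returns the index of the most description-consistent frontier.
    sims = frontier_embs @ caption_emb / (
        np.linalg.norm(frontier_embs, axis=1) * np.linalg.norm(caption_emb) + 1e-8
    )
    return int(np.argmax(sims))

def _relation_holds(relation, offset, left_axis):
    # Hypothetical relation test in the observer's aligned local frame:
    # project the candidate-to-anchor offset onto the observer's left axis.
    proj = float(offset @ left_axis)
    if relation == "left of":
        return proj > 0
    if relation == "right of":
        return proj < 0
    return False  # unsupported relation: reject conservatively

def viewpoint_aware_check(candidate_xy, anchors_xy, relation,
                          n_poses=16, radius=1.5):
    # Sample plausible observer poses on a circle around the candidate;
    # accept the candidate if the spatial relation is satisfiable w.r.t.
    # every anchor object from at least ONE viewpoint.
    for theta in np.linspace(0.0, 2.0 * np.pi, n_poses, endpoint=False):
        pose = candidate_xy + radius * np.array([np.cos(theta), np.sin(theta)])
        forward = candidate_xy - pose          # observer looks at the candidate
        forward /= np.linalg.norm(forward)
        left = np.array([-forward[1], forward[0]])  # observer's left axis
        if all(_relation_holds(relation, a - candidate_xy, left)
               for a in anchors_xy):
            return True
    return False
```

Note the conservative design choice in the sketch: an unrecognized relation makes the check fail for every pose, so the agent keeps exploring rather than stopping on an unverifiable candidate.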