Interpreting visual observations and natural language instructions for complex task execution remains a key challenge in robotics and AI. Despite recent advances, language-driven navigation is still difficult, particularly for UAVs in small-scale 3D environments. Existing Vision-Language Navigation (VLN) approaches are mostly designed for ground robots and struggle to generalize to aerial tasks that require full 3D spatial reasoning. The emergence of large Vision-Language Models (VLMs), such as GPT and Claude, enables zero-shot semantic reasoning from visual and textual inputs. However, these models lack spatial grounding and are not directly applicable to navigation. To address these limitations, we introduce SoraNav, an adaptive UAV navigation framework that integrates zero-shot VLM reasoning with geometry-aware decision-making. Geometric priors are incorporated into image annotations to constrain the VLM action space and improve decision quality. A hybrid switching strategy leverages navigation history to alternate between VLM reasoning and geometry-based exploration, mitigating dead-ends and redundant revisits. A PX4-based hardware-software platform, comprising both a digital twin and a physical micro-UAV, enables reproducible evaluation. Experimental results show that, relative to the baseline, our method improves Success Rate (SR) by 25.7% and Success weighted by Path Length (SPL) by 17% in 2.5D scenarios, and improves SR by 29.5% and SPL by 18.5% in 3D scenarios.
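The hybrid switching strategy described above can be illustrated with a minimal sketch. The abstract does not specify the switching criterion, so this example assumes a simple revisit-count heuristic over a sliding window of recently visited cells; the function name `choose_mode` and its thresholds are hypothetical, not part of SoraNav.

```python
from collections import Counter

def choose_mode(history, revisit_threshold=2, window=5):
    """Pick the decision mode for the next navigation step.

    history: list of visited cell IDs, most recent last.
    If any cell appears more than `revisit_threshold` times in the
    recent window (a sign of a dead-end loop or redundant revisits),
    fall back to geometry-based exploration; otherwise defer to
    zero-shot VLM reasoning.
    """
    recent = Counter(history[-window:])
    if recent and max(recent.values()) > revisit_threshold:
        return "geometry_exploration"
    return "vlm_reasoning"

# A fresh trajectory defers to the VLM; a looping one triggers
# geometry-based exploration.
print(choose_mode([1, 2, 3, 4, 5]))   # vlm_reasoning
print(choose_mode([1, 2, 1, 2, 1]))   # geometry_exploration
```

The actual criterion in SoraNav may combine richer history signals (e.g. progress toward the goal), but the alternation pattern follows this shape.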