Existing vision-language-action (VLA) models act in the 3D real world but are typically built on 2D encoders, leaving a spatial reasoning gap that limits generalization and adaptability. Recent 3D integration techniques for VLAs either require specialized sensors and transfer poorly across modalities, or inject weak cues that lack geometric structure and degrade vision-language alignment. In this work, we introduce FALCON (From Spatial to Action), a novel paradigm that injects rich 3D spatial tokens into the action head. FALCON leverages spatial foundation models to deliver strong geometric priors from RGB alone, and includes an Embodied Spatial Model that can optionally fuse depth or pose for higher fidelity when available, without retraining or architectural changes. To preserve language reasoning, spatial tokens are consumed by a Spatial-Enhanced Action Head rather than being concatenated into the vision-language backbone. These designs enable FALCON to address limitations in spatial representation, modality transferability, and alignment. In comprehensive evaluations across three simulation benchmarks and eleven real-world tasks, FALCON achieves state-of-the-art performance, consistently surpasses competitive baselines, and remains robust under clutter, spatial-prompt conditioning, and variations in object scale and height.
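To make the injection point concrete, the following minimal PyTorch sketch illustrates one way an action head could consume 3D spatial tokens separately from the vision-language stream, as the abstract describes. This is not the paper's implementation: the module name SpatialEnhancedActionHead, the dual cross-attention layout, and all dimensions are illustrative assumptions.

```
# Minimal sketch (assumptions, not the authors' code): an action head that
# cross-attends over vision-language tokens and over 3D spatial tokens,
# so geometric cues reach action prediction without being concatenated
# into the vision-language backbone.
import torch
import torch.nn as nn


class SpatialEnhancedActionHead(nn.Module):
    def __init__(self, d_model=512, n_heads=8, n_action_queries=8, action_dim=7):
        super().__init__()
        # Learnable action queries that gather context via cross-attention.
        self.action_queries = nn.Parameter(torch.randn(n_action_queries, d_model))
        # Cross-attention over vision-language tokens from the 2D backbone.
        self.vl_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Separate cross-attention over 3D spatial tokens (e.g. produced by a
        # spatial foundation model from RGB, optionally refined with depth/pose).
        self.spatial_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # Project attended queries to continuous actions (e.g. 6-DoF pose + gripper).
        self.action_proj = nn.Linear(d_model, action_dim)

    def forward(self, vl_tokens, spatial_tokens):
        # vl_tokens:      (B, N_vl, d_model) vision-language features
        # spatial_tokens: (B, N_sp, d_model) 3D spatial tokens
        B = vl_tokens.shape[0]
        q = self.action_queries.unsqueeze(0).expand(B, -1, -1)
        q = self.norm1(q + self.vl_attn(q, vl_tokens, vl_tokens)[0])
        q = self.norm2(q + self.spatial_attn(q, spatial_tokens, spatial_tokens)[0])
        return self.action_proj(q)  # (B, n_action_queries, action_dim)


if __name__ == "__main__":
    head = SpatialEnhancedActionHead()
    vl = torch.randn(2, 64, 512)   # dummy vision-language tokens
    sp = torch.randn(2, 32, 512)   # dummy 3D spatial tokens
    print(head(vl, sp).shape)      # torch.Size([2, 8, 7])
```

Keeping the spatial pathway inside the action head, as in this sketch, is what allows the upstream vision-language alignment to remain untouched while geometry still conditions action prediction.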