While Vision-Language-Action (VLA) models have achieved remarkable success in ground-based embodied intelligence, their application to Aerial Manipulation Systems (AMS) remains a largely unexplored frontier. The inherent characteristics of AMS, including floating-base dynamics, strong coupling between the UAV and the manipulator, and the multi-step, long-horizon nature of operational tasks, pose severe challenges to existing VLA paradigms designed for static or 2D mobile bases. To bridge this gap, we propose AIR-VLA, the first VLA benchmark specifically tailored for aerial manipulation. We construct a physics-based simulation environment and release a high-quality multimodal dataset comprising 3000 manually teleoperated demonstrations, covering base manipulation, object and spatial understanding, semantic reasoning, and long-horizon planning. Leveraging this platform, we systematically evaluate mainstream VLA models and state-of-the-art vision-language models (VLMs). Our experiments not only validate the feasibility of transferring VLA paradigms to aerial systems but also, through multi-dimensional metrics tailored to aerial tasks, reveal the capabilities and limitations of current models in UAV mobility, manipulator control, and high-level planning. AIR-VLA establishes a standardized testbed and data foundation for future research in general-purpose aerial robotics. The resources for AIR-VLA will be made available at https://anonymous.4open.science/r/AIR-VLA-dataset-B5CC/.