Traditional reinforcement learning-based robotic control methods are often task-specific and fail to generalize across diverse environments or to unseen objects and instructions. Vision-Language Models (VLMs) demonstrate strong scene understanding and planning capabilities but cannot generate actionable policies tailored to specific robotic embodiments. Vision-Language-Action (VLA) models have emerged to address this gap, yet they still struggle with long-horizon spatial reasoning and grounded task planning. In this work, we propose Emma-X, an Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning. Emma-X is trained on a hierarchical embodiment dataset we constructed from BridgeV2, containing 60,000 robot manipulation trajectories auto-annotated with grounded task reasoning and spatial guidance. Additionally, we introduce a trajectory segmentation strategy based on gripper states and motion trajectories, which helps mitigate hallucination when generating grounded subtask reasoning. Experimental results demonstrate that Emma-X outperforms competitive baselines, particularly on real-world robotic tasks requiring spatial reasoning.
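To make the segmentation idea concrete, the following is a minimal sketch, not the paper's exact algorithm: it assumes a binary gripper signal per timestep and splits a trajectory into subtask segments wherever the gripper flips between open and closed. The function name and the open/closed encoding are illustrative assumptions.

```python
# Hypothetical sketch of gripper-state-based trajectory segmentation
# (illustrative only; the paper also uses motion-trajectory cues).
from typing import List, Tuple

def segment_by_gripper(gripper_states: List[int]) -> List[Tuple[int, int]]:
    """Return (start, end) index ranges over the trajectory, splitting
    wherever the binary gripper state flips (e.g. 1 = open, 0 = closed)."""
    segments = []
    start = 0
    for t in range(1, len(gripper_states)):
        if gripper_states[t] != gripper_states[t - 1]:
            segments.append((start, t))  # close the current segment at the flip
            start = t
    segments.append((start, len(gripper_states)))  # final segment
    return segments

# Example: open, open, close, close, close, open
print(segment_by_gripper([1, 1, 0, 0, 0, 1]))  # → [(0, 2), (2, 5), (5, 6)]
```

Each resulting segment roughly corresponds to one manipulation phase (e.g. reach, grasp, transport), giving a natural unit over which to annotate grounded subtask reasoning.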