Equipping embodied agents with the ability to reason about tasks, foresee physical outcomes, and generate precise actions is essential for general-purpose manipulation. While recent Vision-Language-Action (VLA) models have leveraged pretrained foundation models, they typically focus on either linguistic planning or visual forecasting in isolation. These methods rarely integrate both capabilities simultaneously to guide action generation, leading to suboptimal performance in complex, long-horizon manipulation tasks. To bridge this gap, we propose BagelVLA, a unified model that integrates linguistic planning, visual forecasting, and action generation within a single framework. Initialized from a pretrained unified understanding-and-generation model, BagelVLA is trained to interleave textual reasoning and visual prediction directly into the action execution loop. To efficiently couple these modalities, we introduce Residual Flow Guidance (RFG), which initializes the denoising trajectory from the current observation and uses a single denoising step to extract predictive visual features, guiding action generation with minimal latency. Extensive experiments demonstrate that BagelVLA outperforms existing baselines by a significant margin on multiple simulated and real-world benchmarks, particularly in tasks requiring multi-stage reasoning.
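To make the RFG idea concrete, the following is a minimal sketch of the described flow: encode the current observation into a latent, take a single denoising (Euler) step along a predicted velocity to obtain predictive visual features, and condition the action head on both. All module names (ObsEncoder, VelocityPredictor, ActionHead), dimensions, and the exact update rule are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of Residual Flow Guidance (RFG): initialize the flow from the
# current observation and use one denoising step to get predictive features.
import torch
import torch.nn as nn

class ObsEncoder(nn.Module):
    """Maps an RGB observation to a latent feature vector (assumed design)."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, dim, 8, 8), nn.GELU())
    def forward(self, obs):                        # obs: (B, 3, H, W)
        return self.net(obs).flatten(2).mean(-1)   # (B, dim)

class VelocityPredictor(nn.Module):
    """Predicts a flow velocity toward the future latent (assumed design)."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
    def forward(self, z, t):
        # Time conditioning omitted for brevity in this sketch.
        return self.net(z)

class ActionHead(nn.Module):
    """Maps current + predictive features to an action chunk (assumed design)."""
    def __init__(self, dim=256, act_dim=7, horizon=8):
        super().__init__()
        self.net = nn.Linear(2 * dim, act_dim * horizon)
        self.act_dim, self.horizon = act_dim, horizon
    def forward(self, feats):
        return self.net(feats).view(-1, self.horizon, self.act_dim)

def rfg_step(obs, encoder, velocity, head, dt=1.0):
    # 1) Initialize the flow from the *current* observation latent.
    z0 = encoder(obs)
    # 2) Single denoising/Euler step toward a predicted future latent.
    v = velocity(z0, t=torch.zeros(obs.size(0), device=obs.device))
    z_pred = z0 + dt * v
    # 3) Guide action generation with both current and predictive features.
    return head(torch.cat([z0, z_pred], dim=-1))

if __name__ == "__main__":
    enc, vel, head = ObsEncoder(), VelocityPredictor(), ActionHead()
    actions = rfg_step(torch.randn(2, 3, 224, 224), enc, vel, head)
    print(actions.shape)  # torch.Size([2, 8, 7])
```

Because only one denoising step is taken from the observation latent rather than a full iterative sampling loop, the added inference cost of the predictive branch stays small, which is consistent with the low-latency claim above.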