Simulation offers a scalable and low-cost way to enrich vision-language-action (VLA) training, reducing reliance on expensive real-robot demonstrations. However, most sim-real co-training methods rely on supervised fine-tuning (SFT), which treats simulation as a static source of demonstrations and does not exploit large-scale closed-loop interaction. Consequently, real-world gains and generalization are often limited. In this paper, we propose an \underline{\textit{RL}}-based sim-real \underline{\textit{Co}}-training \modify{(RL-Co)} framework that leverages interactive simulation while preserving real-world capabilities. Our method follows a generic two-stage design: we first warm-start the policy with SFT on a mixture of real and simulated demonstrations, then fine-tune it with reinforcement learning in simulation while adding an auxiliary supervised loss on real-world data to anchor the policy and mitigate catastrophic forgetting. We evaluate our framework on four real-world tabletop manipulation tasks using two representative VLA architectures, OpenVLA and $\pi_{0.5}$, and observe consistent improvements over real-only fine-tuning and SFT-based co-training, including +24% real-world success on OpenVLA and +20% on $\pi_{0.5}$. Beyond higher success rates, RL co-training yields stronger generalization to unseen task variations and substantially improved real-world data efficiency, providing a practical and scalable pathway for leveraging simulation to enhance real-robot deployment.
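The two-stage objective described above can be sketched as a single combined gradient: an RL term computed on simulated rollouts plus a weighted auxiliary supervised (behavior-cloning) term on real demonstrations that anchors the policy. The sketch below is a minimal illustration, not the paper's implementation: the toy linear policy, the advantage-weighted-regression surrogate for the RL objective, and the weighting coefficient `lam` are all assumptions introduced for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear policy: action = W @ obs. Real VLA policies are large
# transformers; this only illustrates the combined training objective.
W = rng.normal(scale=0.1, size=(2, 4))

def combined_loss_grad(W, sim_obs, sim_act, sim_adv,
                       real_obs, real_act, lam=0.5):
    """Gradient of L = L_RL + lam * L_SFT.

    L_RL : advantage-weighted regression surrogate on simulated rollouts
           (a stand-in for the RL objective, which the abstract leaves open).
    L_SFT: mean-squared behavior-cloning loss on real demonstrations,
           the auxiliary anchor against catastrophic forgetting.
    `lam` is a hypothetical weighting coefficient, not from the paper.
    """
    # RL term: pull predictions toward high-advantage simulated actions.
    pred_sim = sim_obs @ W.T                              # (N, 2)
    g_rl = ((pred_sim - sim_act) * sim_adv[:, None]).T @ sim_obs / len(sim_obs)
    # Auxiliary SFT term: stay close to real demonstrations.
    pred_real = real_obs @ W.T                            # (M, 2)
    g_sft = (pred_real - real_act).T @ real_obs / len(real_obs)
    return g_rl + lam * g_sft

# One gradient step on synthetic data, mimicking a co-training update.
sim_obs = rng.normal(size=(32, 4)); sim_act = rng.normal(size=(32, 2))
sim_adv = rng.normal(size=32)
real_obs = rng.normal(size=(8, 4)); real_act = rng.normal(size=(8, 2))
grad = combined_loss_grad(W, sim_obs, sim_act, sim_adv, real_obs, real_act)
W = W - 0.01 * grad
```

In practice the RL term would come from the chosen policy-optimization algorithm and the SFT term from the same supervised loss used during the warm-start stage; only their sum changes between the two stages.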