The growing adoption of Vision-Language-Action (VLA) models in embodied AI intensifies the demand for diverse manipulation demonstrations. However, the high cost of data collection often leads to insufficient coverage across scenarios, which limits model performance. We observe that the spatial reasoning phase (SRP) in large workspaces dominates the failure cases. Fortunately, such data can be collected at low cost, underscoring the potential of leveraging inexpensive data to improve model performance. In this paper, we introduce DataPlatter, a method that decouples training trajectories into distinct task stages and leverages abundant, easily collectible SRP data to enhance the generalization of VLA models. Through analysis, we demonstrate that sub-task-specific training with additional SRP data in an appropriate proportion can act as a performance catalyst for robot manipulation, maximizing the utilization of costly physical interaction phase (PIP) data. Experiments show that by introducing a large proportion of cost-effective SRP trajectories into a limited set of PIP data, we achieve a maximum improvement of 41\% in success rate on zero-shot scenes, while retaining the ability to transfer manipulation skills to novel targets.