The pursuit of general-purpose robotic manipulation is hindered by the scarcity of diverse, real-world interaction data. Unlike web-scale data collection in vision or language, robotic data collection is an active process that incurs prohibitive physical costs. Consequently, automated task curation that maximizes data value remains a critical yet under-explored challenge. Existing manual methods are unscalable and biased toward common tasks, while off-the-shelf foundation models often hallucinate physically infeasible instructions. To address this, we introduce RoboGene, an agentic framework that automates the generation of diverse, physically plausible manipulation tasks across single-arm, dual-arm, and mobile robots. RoboGene integrates three core components: diversity-driven sampling for broad task coverage, self-reflection mechanisms to enforce physical constraints, and human-in-the-loop refinement for continuous improvement. We conduct extensive quantitative analysis and large-scale real-world experiments, collecting a dataset of 18k trajectories and introducing novel metrics to assess task quality, feasibility, and diversity. Results demonstrate that RoboGene significantly outperforms state-of-the-art foundation models (e.g., GPT-4o, Gemini 2.5 Pro). Furthermore, real-world experiments show that vision-language-action (VLA) models pre-trained with RoboGene-generated data achieve higher success rates and superior generalization, underscoring the importance of high-quality task generation. Our project is available at https://robogene-boost-vla.github.io.