Building generic robotic manipulation systems often requires large amounts of real-world data, which can be difficult to collect. Synthetic data generation offers a promising alternative, but limiting the sim-to-real gap requires significant engineering effort. To reduce this effort, we investigate the use of pretrained text-to-image diffusion models for texturing synthetic images and compare this approach with using random textures, a common domain randomization technique in synthetic data generation. We focus on generating object-centric representations, such as keypoints and segmentation masks, which are important for robotic manipulation and require precise annotations. We evaluate the efficacy of the texturing methods by training models on the synthetic data and measuring their performance on real-world datasets for three object categories: shoes, T-shirts, and mugs. Surprisingly, we find that texturing using a diffusion model performs on par with random textures, despite generating seemingly more realistic images. Our results suggest that, for now, using diffusion models for texturing does not benefit synthetic data generation for robotics. The code, data, and trained models are available at \url{https://github.com/tlpss/diffusing-synthetic-data.git}.