The diversity, quantity, and quality of manipulation data are critical for training effective robot policies. However, due to hardware and physical setup constraints, collecting large-scale real-world manipulation data across diverse environments remains difficult. Recent work uses text-prompt-conditioned image diffusion models to augment manipulation data by altering the backgrounds and tabletop objects in the visual observations. However, these approaches often overlook that state-of-the-art policy models require multi-view and temporally coherent observations. Moreover, text prompts alone cannot reliably specify the desired scene setup. To provide the diffusion model with explicit visual guidance, we introduce visual identity prompting, which supplies exemplar images as conditioning inputs that guide generation toward the desired scene setup. To this end, we also build a scalable pipeline that curates a visual identity pool from large robotics datasets. Training downstream vision-language-action and visuomotor policy models on our augmented manipulation data yields consistent performance gains in both simulation and real-robot settings.
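To make the idea of exemplar-image conditioning concrete, below is a minimal sketch using the off-the-shelf IP-Adapter support in Hugging Face diffusers as a stand-in for visual identity prompting. The checkpoint names, adapter weights, file paths, and guidance scale are illustrative assumptions, not the paper's actual pipeline, which additionally must enforce multi-view and temporal coherence across observations.

```python
# Minimal sketch of exemplar-image conditioning ("visual identity prompting")
# via the IP-Adapter integration in Hugging Face diffusers. Illustrative only:
# the paper's conditioning mechanism and checkpoints are assumptions here.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach an image-prompt adapter so an exemplar image can steer generation
# alongside the text prompt.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.6)  # strength of the visual identity guidance

# Exemplar drawn from a curated visual identity pool (path is hypothetical).
identity_image = load_image("identity_pool/kitchen_table_scene.png")

# The exemplar pins down the scene setup that text alone cannot specify.
augmented = pipe(
    prompt="a robot arm manipulating objects on a tabletop",
    ip_adapter_image=identity_image,
    num_inference_steps=50,
).images[0]
augmented.save("augmented_observation.png")
```

In this sketch, the adapter scale trades off fidelity to the exemplar's scene identity against freedom to vary backgrounds and objects; applying the same exemplar across all camera views and timesteps of a trajectory is one simple way to approximate the multi-view and temporal consistency the abstract calls for.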