Recent advances in learning video generation models from large-scale video data demonstrate significant potential for understanding complex physical dynamics. This suggests the feasibility of leveraging diverse robot trajectory data to develop a unified, dynamics-aware model that enhances robot manipulation. However, given the relatively small amount of available robot data, directly fitting the data without modeling the relationship between visual observations and actions can lead to suboptimal data utilization. To this end, we propose VidMan (Video Diffusion for Robot Manipulation), a novel framework that employs a two-stage training mechanism, inspired by dual-process theory from neuroscience, to improve stability and data utilization efficiency. In the first stage, VidMan is pre-trained on the Open X-Embodiment (OXE) dataset to predict future visual trajectories via video denoising diffusion, enabling the model to develop long-horizon awareness of the environment's dynamics. In the second stage, a flexible yet effective layer-wise self-attention adapter transforms VidMan into an efficient inverse dynamics model that predicts actions modulated by the implicit dynamics knowledge through parameter sharing. VidMan outperforms the state-of-the-art baseline GR-1 on the CALVIN benchmark, achieving an 11.7% relative improvement, and delivers over 9% precision gains on the small-scale OXE dataset. These results provide compelling evidence that world models can significantly enhance the precision of robot action prediction. Code and models will be made public.
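To make the second-stage idea concrete, the following is a minimal NumPy sketch of a layer-wise self-attention adapter that reads per-layer hidden states from a frozen backbone and pools them into an action prediction. All names, shapes, and the single-head attention design are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class SelfAttentionAdapter:
    """Hypothetical layer-wise adapter: a small self-attention block that
    reads one backbone layer's hidden states and emits a residual update.
    The frozen video-diffusion backbone itself is stubbed out below."""
    def __init__(self, d_model, rng):
        s = 1.0 / np.sqrt(d_model)
        self.Wq = rng.normal(0.0, s, (d_model, d_model))
        self.Wk = rng.normal(0.0, s, (d_model, d_model))
        self.Wv = rng.normal(0.0, s, (d_model, d_model))

    def __call__(self, h):
        # h: (seq_len, d_model) hidden states at one layer
        q, k, v = h @ self.Wq, h @ self.Wk, h @ self.Wv
        attn = softmax(q @ k.T / np.sqrt(h.shape[-1]))
        return h + attn @ v  # residual update keeps backbone features intact

rng = np.random.default_rng(0)
d_model, seq_len, n_layers, action_dim = 32, 8, 4, 7
adapters = [SelfAttentionAdapter(d_model, rng) for _ in range(n_layers)]
action_head = rng.normal(0.0, 1.0 / np.sqrt(d_model), (d_model, action_dim))

# Stand-in for the frozen backbone's per-layer dynamics features.
layer_feats = [rng.normal(size=(seq_len, d_model)) for _ in range(n_layers)]

h = layer_feats[0]
for adapter, feats in zip(adapters, layer_feats):
    h = adapter(h + feats)  # fuse each layer's features before attending
action = h.mean(axis=0) @ action_head  # pooled tokens -> action vector
print(action.shape)
```

Only the adapter and action-head weights would be trained here; the backbone features are consumed read-only, which is one plausible reading of "modulated by the implicit dynamics knowledge via parameter sharing."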