Large video models, pretrained on massive amounts of Internet video, provide a rich source of physical knowledge about the dynamics and motions of objects and tasks. However, video models are not grounded in the embodiment of an agent and do not describe how to actuate the world to reach the visual states depicted in a video. To tackle this problem, current methods use a separate vision-based inverse dynamics model, trained on embodiment-specific data, to map image states to actions. Gathering data to train such a model is often expensive and challenging, and the resulting model is limited to visual settings similar to those in which the data were collected. In this paper, we investigate how to directly ground video models to continuous actions through self-exploration in the embodied environment, using generated video states as visual goals for exploration. We propose a framework that combines trajectory-level action generation with video guidance, enabling an agent to solve complex tasks without any external supervision, e.g., rewards, action labels, or segmentation masks. We validate the proposed approach on 8 tasks in Libero, 6 tasks in MetaWorld, 4 tasks in Calvin, and 12 tasks in iThor Visual Navigation. We show that our approach matches or even surpasses multiple behavior cloning baselines trained on expert demonstrations, without requiring any action annotations.
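The core loop described above can be illustrated with a minimal sketch. All names, the toy dynamics, and the random-shooting search below are illustrative assumptions, not the authors' implementation: a pretrained video model supplies goal frames, and the agent searches over trajectory-level action sequences, keeping the one whose resulting states best match those visual goals, with no rewards or action labels involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def video_model_goals(num_frames=4, dim=8):
    """Stand-in for frames generated by a video model (here: random
    feature vectors; in practice these would be encoded goal images)."""
    return rng.normal(size=(num_frames, dim))

def rollout(initial_state, actions):
    """Toy dynamics: the state drifts by each action. This is a
    placeholder for executing actions in the embodied environment."""
    states, s = [], initial_state.copy()
    for a in actions:
        s = s + a
        states.append(s.copy())
    return np.array(states)

def goal_matching_cost(states, goals):
    """Mean distance between achieved states and the video goal frames."""
    # Subsample the trajectory so it aligns with the goal frames.
    idx = np.linspace(0, len(states) - 1, len(goals)).astype(int)
    return float(np.mean(np.linalg.norm(states[idx] - goals, axis=-1)))

def explore(goals, horizon=12, num_candidates=256, dim=8):
    """Trajectory-level search: sample whole action sequences and keep
    the one whose rollout best matches the visual goals."""
    s0 = np.zeros(dim)
    best_cost, best_actions = np.inf, None
    for _ in range(num_candidates):
        actions = rng.normal(scale=0.5, size=(horizon, dim))
        cost = goal_matching_cost(rollout(s0, actions), goals)
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions, best_cost

goals = video_model_goals()
actions, cost = explore(goals)
print(f"best goal-matching cost: {cost:.3f}")
```

In a real system the random-shooting search would typically be replaced by an iterative optimizer (e.g. CEM-style refitting of the action distribution), and the matching cost would be computed in a learned visual feature space rather than directly on states.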