World models aim to learn action-controlled prediction models and have proven essential for the development of intelligent agents. However, most existing world models rely heavily on large amounts of action-labeled data and costly training, making them difficult to adapt to novel environments with heterogeneous actions through limited interactions. This limitation can hinder their applicability across broader domains. To overcome this challenge, we propose AdaWorld, an innovative world model learning approach that enables efficient adaptation. The key idea is to incorporate action information during the pretraining of world models. This is achieved by extracting latent actions from videos in a self-supervised manner, capturing the most critical transitions between frames. We then develop an autoregressive world model that conditions on these latent actions. This learning paradigm enables highly adaptable world models, facilitating efficient transfer and learning of new actions even with limited interactions and finetuning. Our comprehensive experiments across multiple environments demonstrate that AdaWorld achieves superior performance in both simulation quality and visual planning.
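To make the two-stage idea concrete, here is a minimal sketch of the general pattern: a latent action is inferred from a pair of consecutive frames and quantized against a small codebook, and an autoregressive step then predicts the next frame conditioned on the current frame and that latent action. All names, dimensions, and the VQ-style quantization are illustrative assumptions, not the actual AdaWorld architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration only).
CODEBOOK_SIZE, LATENT_DIM, FRAME_DIM = 8, 4, 16

# Randomly initialized stand-ins for learned parameters.
codebook = rng.normal(size=(CODEBOOK_SIZE, LATENT_DIM))          # latent action codes
enc_W = rng.normal(size=(FRAME_DIM, LATENT_DIM)) * 0.1           # toy transition encoder
dec_W = rng.normal(size=(FRAME_DIM + LATENT_DIM, FRAME_DIM)) * 0.1  # toy dynamics model

def infer_latent_action(frame_t, frame_t1):
    """Self-supervised step (sketch): encode the frame-to-frame transition
    and snap it to the nearest codebook entry (VQ-style quantization)."""
    z = (frame_t1 - frame_t) @ enc_W
    idx = int(np.argmin(np.linalg.norm(codebook - z, axis=1)))
    return idx, codebook[idx]

def predict_next_frame(frame_t, latent_action):
    """Autoregressive step (sketch): predict the next frame conditioned on
    the current frame and the inferred latent action."""
    return np.concatenate([frame_t, latent_action]) @ dec_W

# One rollout step on random "frames".
f0 = rng.normal(size=FRAME_DIM)
f1 = rng.normal(size=FRAME_DIM)
idx, action = infer_latent_action(f0, f1)
pred = predict_next_frame(f0, action)
```

Because the latent actions are extracted without labels, the same conditioning interface can later be mapped to a new environment's real actions with only limited finetuning, which is the adaptability the abstract claims.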