A world model is essential for an agent to predict the future and plan in domains such as autonomous driving and robotics. To this end, recent work has focused on video generation, which has attracted significant attention owing to the impressive success of diffusion models. However, these models demand substantial computational resources. To address this challenge, we propose a world model that operates in an object-centric representation space built with slot attention and guided by language instructions. Our model encodes the current state as an object-centric representation and predicts future states in this representation space conditioned on natural language instructions. The result is a more compact and computationally efficient model than diffusion-based generative alternatives. Moreover, it flexibly predicts future states from language instructions, which is a significant advantage in manipulation tasks where object recognition is crucial. In this paper, we demonstrate that our latent predictive world model surpasses generative world models on visuo-linguo-motor control tasks, achieving superior sample and computation efficiency. We also investigate the generalization performance of the proposed method and explore various strategies for predicting actions from object-centric representations.
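To make the object-centric encoding concrete, the following is a minimal, illustrative sketch of the slot-attention mechanism the abstract refers to: a fixed set of slot vectors competes (via a softmax over slots) to explain input feature vectors, and each slot is updated toward the features it wins. This is a simplified NumPy version with random projections omitted and no GRU/MLP update, written for illustration only; it is not the authors' implementation, and all names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(inputs, num_slots=4, iters=3, seed=0):
    """Simplified slot attention (illustrative only).

    inputs: (n_features, dim) array of per-location image features.
    Returns: (num_slots, dim) slot representations, one per inferred object.
    """
    n, dim = inputs.shape
    rng = np.random.default_rng(seed)
    slots = rng.normal(size=(num_slots, dim))
    for _ in range(iters):
        # Attention logits between each input feature and each slot.
        logits = inputs @ slots.T / np.sqrt(dim)          # (n, num_slots)
        # Softmax over *slots*: slots compete for each input feature.
        attn = softmax(logits, axis=1)
        # Normalize per slot, then take the weighted mean of its features.
        attn = attn / (attn.sum(axis=0, keepdims=True) + 1e-8)
        slots = attn.T @ inputs                           # (num_slots, dim)
    return slots

# Toy usage: 16 feature vectors of dimension 8 grouped into 4 slots.
features = np.random.default_rng(1).normal(size=(16, 8))
slots = slot_attention(features, num_slots=4)
```

In the full model described above, a learned predictor would then roll these slot states forward in time, conditioned on an embedding of the language instruction, instead of generating future video frames pixel by pixel.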