World models integrate raw data from various modalities, such as images and language, to simulate comprehensive interactions in the world, thereby playing crucial roles in fields like mixed reality and robotics. Yet, applying world models to accurate video prediction is quite challenging due to the complex and dynamic intentions of the various scenes encountered in practice. In this paper, inspired by the human rethinking process, we decompose complex video prediction into four meta-tasks that enable the world model to handle this issue in a more fine-grained manner. Alongside these tasks, we introduce a new benchmark named the Embodied Video Anticipation Benchmark (EVA-Bench) to provide a well-rounded evaluation. EVA-Bench focuses on evaluating the video prediction ability of human and robot actions, presenting significant challenges for both the language model and the generation model. Targeting embodied video prediction, we propose the Embodied Video Anticipator (EVA), a unified framework for video understanding and generation. EVA integrates a video generation model with a visual language model, effectively combining reasoning capabilities with high-quality generation. Moreover, to enhance the generalization of our framework, we tailor a multi-stage pretraining paradigm that adaptively ensembles LoRA modules to produce high-fidelity results. Extensive experiments on EVA-Bench highlight the potential of EVA to significantly improve performance in embodied scenes, paving the way for large-scale pre-trained models in real-world prediction tasks.