Video world models aim to simulate dynamic, real-world environments, yet existing methods struggle to provide unified and precise control over camera and multi-object motion, as videos inherently represent dynamics on the projected 2D image plane. To bridge this gap, we introduce VerseCrafter, a 4D-aware video world model that enables explicit and coherent control over both camera and object dynamics within a unified 4D geometric world state. Our approach is centered on a novel 4D Geometric Control representation, which encodes the world state through a static background point cloud and per-object 3D Gaussian trajectories. This representation captures not only an object's path but also its probabilistic 3D occupancy over time, offering a flexible, category-agnostic alternative to rigid bounding boxes or parametric models. These 4D controls are rendered into conditioning signals for a pretrained video diffusion model, enabling the generation of high-fidelity, view-consistent videos that precisely adhere to the specified dynamics. A further major challenge lies in the scarcity of large-scale training data with explicit 4D annotations. We address this by developing an automatic data engine that extracts the required 4D controls from in-the-wild videos, allowing us to train our model on a massive and diverse dataset.
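To make the representation concrete, below is a minimal sketch, assuming a Python/NumPy setting, of a 4D world state built from a static background point cloud and per-object 3D Gaussian trajectories, with a simple point projection standing in for the conditioning render. All names here (GaussianTrajectory, Scene4DControl, render_condition) and the specific occupancy form are illustrative assumptions, not the paper's actual interface.

```python
# Illustrative sketch only; class names, fields, and the occupancy form
# are assumptions, not VerseCrafter's actual API.
import numpy as np
from dataclasses import dataclass


@dataclass
class GaussianTrajectory:
    """Per-object dynamics: one 3D Gaussian per frame, giving a
    probabilistic occupancy volume instead of a rigid bounding box."""
    means: np.ndarray  # (T, 3) Gaussian centers over T frames
    covs: np.ndarray   # (T, 3, 3) per-frame covariances (anisotropic extent)

    def occupancy(self, t: int, points: np.ndarray) -> np.ndarray:
        """Unnormalized occupancy probability of query points (N, 3) at frame t."""
        diff = points - self.means[t]              # (N, 3) offsets from center
        inv_cov = np.linalg.inv(self.covs[t])      # (3, 3)
        mahal = np.einsum("ni,ij,nj->n", diff, inv_cov, diff)
        return np.exp(-0.5 * mahal)


@dataclass
class Scene4DControl:
    """Unified 4D world state: static geometry plus camera and object dynamics."""
    background: np.ndarray   # (P, 3) static background point cloud
    objects: list            # list[GaussianTrajectory], one per object
    camera_poses: np.ndarray # (T, 4, 4) world-to-camera extrinsics


def render_condition(scene: Scene4DControl, t: int, K: np.ndarray):
    """Project the static background into the camera at frame t; a real
    pipeline would rasterize both background and Gaussians into images."""
    pts_h = np.concatenate(
        [scene.background, np.ones((len(scene.background), 1))], axis=1
    )                                                 # (P, 4) homogeneous points
    cam = (scene.camera_poses[t] @ pts_h.T).T[:, :3]  # (P, 3) camera space
    cam = cam[cam[:, 2] > 0]                          # keep points in front of camera
    uv = (K @ cam.T).T                                # (P', 3) image-plane coords
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]          # pixel locations and depths
```

Because each object is a Gaussian rather than a box, occupancy falls off smoothly with distance from the trajectory center, which is one way to read the abstract's claim that the control is flexible and category-agnostic: no class-specific template or parametric shape model is required.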