Recent advances in driving-scene generation and reconstruction have demonstrated significant potential for enhancing autonomous driving systems by producing scalable and controllable training data. Existing generation methods primarily focus on synthesizing diverse and high-fidelity driving videos; however, due to limited 3D consistency and sparse viewpoint coverage, they struggle to support convenient and high-quality novel-view synthesis (NVS). Conversely, recent 3D/4D reconstruction approaches have significantly improved NVS for real-world driving scenes, yet they inherently lack generative capabilities. To overcome this dilemma between scene generation and reconstruction, we propose WorldSplat, a novel feed-forward framework for 4D driving-scene generation. Our approach effectively generates consistent multi-track videos through two key steps: (i) we introduce a 4D-aware latent diffusion model that integrates multi-modal information to produce pixel-aligned 4D Gaussians in a feed-forward manner; (ii) subsequently, we refine the novel-view videos rendered from these Gaussians using an enhanced video diffusion model. Extensive experiments conducted on benchmark datasets demonstrate that WorldSplat effectively generates high-fidelity, temporally and spatially consistent multi-track novel-view driving videos. Project: https://wm-research.github.io/worldsplat/
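To make the two-stage pipeline concrete, the sketch below outlines the control flow implied by the abstract: a feed-forward pass that predicts pixel-aligned 4D Gaussians, rasterization along user-specified camera tracks, and a diffusion-based refinement of the rendered videos. This is a minimal, hypothetical illustration; every class and function name here is an assumed placeholder, not the authors' actual API, and the model internals are stubbed out.

```python
# Hypothetical sketch of the WorldSplat two-stage pipeline described above.
# All names (Gaussians4D, latent_diffusion_to_gaussians, etc.) are
# illustrative placeholders, not the real implementation.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Gaussians4D:
    """Pixel-aligned 4D Gaussians: per-pixel position, scale, rotation,
    opacity, and color, plus a temporal component for scene dynamics."""
    params: List[dict] = field(default_factory=list)


def latent_diffusion_to_gaussians(conditions: Dict[str, object]) -> Gaussians4D:
    """Stage (i): a 4D-aware latent diffusion model fuses multi-modal
    conditioning and decodes pixel-aligned 4D Gaussians in one
    feed-forward pass (stubbed)."""
    return Gaussians4D()


def render_novel_views(gaussians: Gaussians4D,
                       camera_tracks: List[list]) -> List[list]:
    """Rasterize the 4D Gaussians along each requested camera track,
    producing raw novel-view videos that may contain artifacts (stubbed)."""
    return [[] for _ in camera_tracks]


def refine_with_video_diffusion(raw_videos: List[list]) -> List[list]:
    """Stage (ii): an enhanced video diffusion model refines the rendered
    videos to restore detail and temporal consistency (stubbed)."""
    return raw_videos


def worldsplat_generate(conditions: Dict[str, object],
                        camera_tracks: List[list]) -> List[list]:
    """End-to-end generation: feed-forward Gaussians, render, then refine."""
    gaussians = latent_diffusion_to_gaussians(conditions)
    raw_videos = render_novel_views(gaussians, camera_tracks)
    return refine_with_video_diffusion(raw_videos)
```

The key design choice reflected here is that Gaussian prediction is feed-forward rather than per-scene optimized, so novel camera tracks only require re-rendering and refinement, not re-training.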