World models can foresee the outcomes of different actions, which is of paramount importance for autonomous driving. Nevertheless, existing driving world models still have limitations in generalization to unseen environments, prediction fidelity of critical details, and action controllability for flexible application. In this paper, we present Vista, a generalizable driving world model with high fidelity and versatile controllability. Based on a systematic diagnosis of existing methods, we introduce several key ingredients to address these limitations. To accurately predict real-world dynamics at high resolution, we propose two novel losses to promote the learning of moving instances and structural information. We also devise an effective latent replacement approach to inject historical frames as priors for coherent long-horizon rollouts. For action controllability, we incorporate a versatile set of controls from high-level intentions (command, goal point) to low-level maneuvers (trajectory, angle, and speed) through an efficient learning strategy. After large-scale training, the capabilities of Vista can seamlessly generalize to different scenarios. Extensive experiments on multiple datasets show that Vista outperforms the most advanced general-purpose video generator in over 70% of comparisons and surpasses the best-performing driving world model by 55% in FID and 27% in FVD. Moreover, for the first time, we utilize the capacity of Vista itself to establish a generalizable reward for real-world action evaluation without accessing the ground truth actions.