Recent successes in autoregressive (AR) generation models, such as the GPT series in natural language processing, have motivated efforts to replicate this success in visual tasks. Some works attempt to extend this approach to autonomous driving by building video-based world models capable of generating realistic future video sequences and predicting ego states. However, prior works tend to produce unsatisfactory results, as the classic GPT framework is designed to handle 1D contextual information, such as text, and lacks the inherent ability to model the spatial and temporal dynamics essential for video generation. In this paper, we present DrivingWorld, a GPT-style world model for autonomous driving, featuring several spatial-temporal fusion mechanisms. This design enables effective modeling of both spatial and temporal dynamics, facilitating high-fidelity, long-duration video generation. Specifically, we propose a next-state prediction strategy to model temporal coherence between consecutive frames, and apply a next-token prediction strategy to capture spatial information within each frame. To further enhance generalization, we propose novel masking and reweighting strategies for token prediction that mitigate long-term drifting and enable precise control. Our method produces high-fidelity, consistent video clips of over 40 seconds in duration, more than twice the duration achieved by state-of-the-art driving world models. Experiments show that, in contrast to prior works, our method achieves superior visual quality and significantly more accurate, controllable future video generation. Our code is available at https://github.com/YvanYin/DrivingWorld.
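To make the two-level factorization above concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' released implementation): a causal temporal module attends across per-frame state summaries (next-state prediction), and a causal spatial module predicts discrete visual tokens within each frame (next-token prediction). All module names, shapes, and hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch of next-state (temporal) + next-token (spatial)
# autoregressive prediction over tokenized video frames. Names, shapes,
# and layer sizes are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn

class TwoLevelARSketch(nn.Module):
    def __init__(self, vocab_size=1024, d_model=256, n_heads=4):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Temporal stream: causal attention over frame-level summaries.
        self.temporal = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # Spatial stream: causal attention over tokens within one frame.
        self.spatial = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, frames, tokens_per_frame) of discrete visual tokens.
        B, T, N = tokens.shape
        x = self.token_emb(tokens)                           # (B, T, N, d)
        frame_state = x.mean(dim=2)                          # crude per-frame summary
        causal_t = torch.triu(torch.full((T, T), float('-inf')), diagonal=1)
        ctx = self.temporal(frame_state, src_mask=causal_t)  # next-state context
        # Condition each frame's spatial decoding on the temporal context.
        x = (x + ctx.unsqueeze(2)).reshape(B * T, N, -1)
        causal_s = torch.triu(torch.full((N, N), float('-inf')), diagonal=1)
        h = self.spatial(x, src_mask=causal_s)               # next-token prediction
        return self.head(h).reshape(B, T, N, -1)             # logits over vocab

# Usage (illustrative): per-token logits for 2 clips of 4 frames, 16 tokens each.
# logits = TwoLevelARSketch()(torch.randint(0, 1024, (2, 4, 16)))
```

Under these assumptions, generation would alternate between sampling one frame's tokens spatially and advancing the temporal state to the next frame; the paper's masking and reweighting strategies would modify how these token predictions are trained and weighted.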