Scaling Vision-Language-Action (VLA) models on large-scale data offers a promising path toward more generalized driving intelligence. However, VLA models suffer from a ``supervision deficit'': their vast capacity is supervised only by sparse, low-dimensional actions, leaving much of their representational power underutilized. To remedy this, we propose \textbf{DriveVLA-W0}, a training paradigm that employs world modeling to predict future images. This task provides a dense, self-supervised signal that compels the model to learn the underlying dynamics of the driving environment. We demonstrate the paradigm's versatility by instantiating it for the two dominant VLA archetypes: an autoregressive world model for VLAs that use discrete visual tokens, and a diffusion world model for those operating on continuous visual features. Building on the rich representations learned through world modeling, we introduce a lightweight action expert to reduce inference latency for real-time deployment. Extensive experiments on the NAVSIM v1/v2 benchmarks and a 680x larger in-house dataset demonstrate that DriveVLA-W0 significantly outperforms BEV and VLA baselines. Crucially, it amplifies the data scaling law: performance gains accelerate as the training dataset grows.