We present Poutine, a 3B-parameter vision-language model (VLM) tailored for end-to-end autonomous driving in long-tail scenarios. Poutine is trained in two stages. To obtain strong base driving capabilities, we train Poutine-Base with self-supervised vision-language-trajectory (VLT) next-token prediction on 83 hours of nominal driving from CoVLA and 11 hours of long-tail driving from Waymo; the accompanying language annotations are auto-generated by a 72B-parameter VLM. Poutine is then obtained by fine-tuning Poutine-Base with Group Relative Policy Optimization (GRPO) on fewer than 500 preference-labeled frames from the Waymo validation set. We show that both VLT pretraining and RL fine-tuning are critical to attain strong driving performance in the long tail. Poutine-Base achieves a rater-feedback score (RFS) of 8.12 on the validation set, nearly matching Waymo's expert ground-truth RFS. The final Poutine model achieves an RFS of 7.99 on the official Waymo test set, placing first in the 2025 Waymo Vision-Based End-to-End Driving Challenge by a significant margin. These results highlight the promise of scalable VLT pretraining combined with lightweight RL fine-tuning to enable robust and generalizable autonomy.