Instruction-grounded driving, in which passenger language guides trajectory planning, requires vehicles to understand intent before motion. However, most prior instruction-following planners rely on simulation or fixed command vocabularies, limiting real-world generalization. doScenes, the first real-world dataset linking free-form instructions (with referentiality) to nuScenes ground-truth motion, enables instruction-conditioned planning. In this work, we adapt OpenEMMA, an open-source MLLM-based end-to-end driving framework that ingests front-camera views and ego-state and outputs 10-step speed-curvature trajectories, to this setting, presenting a reproducible instruction-conditioned baseline on doScenes and investigating how human instruction prompts affect predicted driving behavior. We integrate doScenes directives as passenger-style prompts within OpenEMMA's vision-language interface, enabling linguistic conditioning before trajectory generation. Evaluating on 849 annotated scenes with average displacement error (ADE), we find that instruction conditioning substantially improves robustness by preventing extreme baseline failures, yielding a 98.7% reduction in mean ADE. When such outliers are removed, instructions still influence trajectory alignment, with well-phrased prompts improving ADE by up to 5.1%. We use this analysis to discuss what makes a "good" instruction for the OpenEMMA framework. We release the evaluation prompts and scripts to establish a reproducible baseline for instruction-aware planning. GitHub: https://github.com/Mi3-Lab/doScenes-VLM-Planning