Vision-Language-Action (VLA) models promise an open-vocabulary interface that can translate perceptual ambiguity into semantically grounded driving decisions, yet they still treat language as a static prior fixed at inference time. As a result, the model must infer continuously shifting objectives from pixels alone, yielding delayed or overly conservative maneuvers. We argue that effective VLAs for autonomous driving need an online channel through which users can influence driving with specific intentions. To this end, we present EchoVLA, a user-aware VLA that couples camera streams with in situ audio instructions. We augment the nuScenes dataset with temporally aligned, intent-specific speech commands generated by converting ego-motion descriptions into synthetic audio clips. We then compose emotional speech-trajectory pairs into a multimodal Chain-of-Thought (CoT) for fine-tuning a Multimodal Large Model (MLM) based on Qwen2.5-Omni. Specifically, the audio-augmented dataset pairs different emotion types with corresponding driving behaviors, leveraging the cues embedded in tone, pitch, and speech tempo to reflect varying user states, such as urgent or hesitant intentions; this enables EchoVLA to interpret not only the semantic content but also the emotional context of audio commands, yielding more nuanced and emotionally adaptive driving behavior. In open-loop benchmarks, our approach reduces the average L2 error by $59.4\%$ and the collision rate by $74.4\%$ relative to a vision-only baseline. Further experiments on the nuScenes dataset confirm that EchoVLA not only steers the trajectory through audio instructions, but also modulates driving behavior in response to the emotions detected in the user's speech.
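To make the dataset construction concrete, the sketch below illustrates one plausible way an audio-augmented training record could be assembled: a nuScenes ego-motion description is paired with an emotion label, rendered to speech by a text-to-speech backend, and stored alongside the ground-truth trajectory for CoT fine-tuning. This is a minimal illustration, not the authors' released pipeline; the emotion set, field names, and the `tts` callable are assumptions for exposition.

```python
# Minimal sketch (assumptions, not the authors' code) of assembling one
# emotion-conditioned speech-trajectory sample for CoT fine-tuning.
from dataclasses import dataclass
from typing import Callable, List, Tuple

EMOTIONS = ["neutral", "urgent", "hesitant"]  # illustrative emotion set


@dataclass
class EchoVLASample:
    camera_frames: List[str]               # paths to surround-view images
    instruction_text: str                  # ego-motion description, e.g. "turn left ahead"
    emotion: str                           # user state conveyed by tone/pitch/tempo
    audio_path: str                        # synthesized speech for the instruction
    trajectory: List[Tuple[float, float]]  # future ego waypoints (x, y) in meters


def build_sample(frames: List[str],
                 description: str,
                 emotion: str,
                 waypoints: List[Tuple[float, float]],
                 tts: Callable[[str, str], str]) -> EchoVLASample:
    """Render the description with the given emotion and package a training sample.

    `tts` is an assumed text-to-speech callable: tts(text, emotion) -> path to a wav file.
    """
    assert emotion in EMOTIONS, f"unknown emotion: {emotion}"
    audio_path = tts(description, emotion)
    return EchoVLASample(
        camera_frames=frames,
        instruction_text=description,
        emotion=emotion,
        audio_path=audio_path,
        trajectory=waypoints,
    )
```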