Current speech LLMs largely perform implicit ASR: on tasks solvable from a transcript, they are behaviorally and mechanistically equivalent to simple Whisper$\to$LLM cascades. We show this through matched-backbone testing across four speech LLMs and six tasks, the first evaluation to control for the LLM backbone. Ultravox is statistically indistinguishable from its matched cascade ($\kappa{=}0.93$); the logit lens reveals literal text emerging in hidden states; and LEACE concept erasure confirms that text representations are causally necessary in both architectures tested, with erasure collapsing accuracy to near zero. Qwen2-Audio genuinely diverges, showing that cascade equivalence is architecture-dependent, not universal. For most deployed use cases, current speech LLMs are expensive cascades, and under noise they are worse ones: clean-condition advantages reverse by up to 7.6% at 0 dB SNR.
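The logit-lens probe mentioned above projects intermediate hidden states through the model's output (unembedding) matrix to check which token each state most encodes. A minimal sketch of that projection, using a toy unembedding matrix and vocabulary rather than a real speech LLM (all names and dimensions here are illustrative assumptions):

```python
import numpy as np

def logit_lens(hidden_state, unembedding, tokens):
    """Project an intermediate hidden state through the unembedding
    matrix and return the vocabulary token with the highest logit."""
    logits = hidden_state @ unembedding.T  # shape: (vocab_size,)
    return tokens[int(np.argmax(logits))]

# Toy setup: 4-token vocabulary, 8-dim hidden states (hypothetical).
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat"]
W_U = rng.standard_normal((len(vocab), 8))
W_U /= np.linalg.norm(W_U, axis=1, keepdims=True)  # unit rows

# A hidden state aligned with the "cat" row decodes to "cat",
# mimicking literal text surfacing in an intermediate layer.
print(logit_lens(W_U[1], W_U, vocab))  # → cat
```

In the paper's setting the same projection is applied to the speech LLM's hidden states layer by layer; transcript tokens emerging in this way is the evidence that the model is performing implicit ASR.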