The remarkable performance achieved by Large Language Models (LLMs) has driven research efforts to leverage them for a wide range of tasks and input modalities. In speech-to-text (S2T) tasks, the emerging solution consists of projecting the output of the encoder of a Speech Foundation Model (SFM) into the LLM embedding space through an adapter module. However, no work has yet investigated how much the downstream-task performance depends on each component (SFM, adapter, LLM), nor whether the best design of the adapter depends on the chosen SFM and LLM. To fill this gap, we evaluate the combination of 5 adapter modules, 2 LLMs (Mistral and Llama), and 2 SFMs (Whisper and SeamlessM4T) on two widely studied S2T tasks, namely Automatic Speech Recognition and Speech Translation. Our results demonstrate that the SFM plays a pivotal role in downstream performance, while the adapter choice has a moderate impact that depends on the SFM and LLM.
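To make the described architecture concrete, the following is a minimal sketch, not the paper's implementation, of how an adapter projects SFM encoder states into the LLM embedding space. All names and dimensions (`LinearAdapter`, `sfm_dim=1280`, `llm_dim=4096`) are illustrative assumptions; the dimensions are merely typical of a Whisper-scale encoder and a Llama/Mistral-scale decoder.

```python
# Hedged sketch of the SFM -> adapter -> LLM pipeline described above.
# A (typically frozen) SFM encoder produces speech features, a small
# adapter projects them into the LLM embedding dimension, and the LLM
# decodes text from the concatenation of projected speech embeddings
# and embedded prompt tokens.
import torch
import torch.nn as nn


class LinearAdapter(nn.Module):
    """One of many possible adapter designs: a length-preserving MLP
    mapping SFM encoder states to the LLM's embedding dimension."""

    def __init__(self, sfm_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(sfm_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, sfm_states: torch.Tensor) -> torch.Tensor:
        # (batch, speech_len, sfm_dim) -> (batch, speech_len, llm_dim)
        return self.proj(sfm_states)


# Hypothetical wiring with dummy tensors standing in for real model outputs.
sfm_dim, llm_dim = 1280, 4096
adapter = LinearAdapter(sfm_dim, llm_dim)

speech_states = torch.randn(1, 300, sfm_dim)   # SFM encoder output
prompt_embeds = torch.randn(1, 12, llm_dim)    # embedded text prompt
speech_embeds = adapter(speech_states)         # project into LLM space
llm_inputs = torch.cat([speech_embeds, prompt_embeds], dim=1)
# llm_inputs would then be fed to the LLM (e.g., via an
# `inputs_embeds`-style interface) to generate the transcript/translation.
print(llm_inputs.shape)  # torch.Size([1, 312, 4096])
```

Other adapter designs evaluated in this setting typically differ in how they compress the speech sequence length (e.g., convolutional downsampling or query-based attention) rather than in the basic projection role sketched here.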