The remarkable performance achieved by Large Language Models (LLMs) has driven research efforts to leverage them for a wide range of tasks and input modalities. In speech-to-text (S2T) tasks, the emerging solution consists of projecting the output of the encoder of a Speech Foundation Model (SFM) into the LLM embedding space through an adapter module. However, no work has yet investigated how much the downstream-task performance depends on each component (SFM, adapter, LLM), nor whether the best design of the adapter depends on the chosen SFM and LLM. To fill this gap, we evaluate the combination of 5 adapter modules, 2 LLMs (Mistral and Llama), and 2 SFMs (Whisper and SeamlessM4T) on two widespread S2T tasks, namely Automatic Speech Recognition and Speech Translation. Our results demonstrate that the SFM plays a pivotal role in downstream performance, while the adapter choice has a moderate impact that depends on the specific SFM and LLM.
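To make the architecture concrete, the following is a minimal sketch of one common adapter design (a strided convolution for temporal downsampling followed by a linear projection); the class name, dimensions (1024 for a Whisper-style encoder, 4096 for a Llama-style LLM), and stride are illustrative assumptions, not necessarily any of the 5 adapters evaluated in this work.

```python
import torch
import torch.nn as nn

class SpeechAdapter(nn.Module):
    """Illustrative adapter: downsamples SFM encoder states in time and
    projects them into the LLM embedding space (hypothetical sizes)."""

    def __init__(self, sfm_dim: int = 1024, llm_dim: int = 4096, stride: int = 4):
        super().__init__()
        # Strided 1D convolution shortens the long speech sequence.
        self.downsample = nn.Conv1d(sfm_dim, sfm_dim, kernel_size=stride, stride=stride)
        # Linear layer maps SFM features to the LLM embedding dimension.
        self.proj = nn.Linear(sfm_dim, llm_dim)

    def forward(self, enc_out: torch.Tensor) -> torch.Tensor:
        # enc_out: (batch, time, sfm_dim) -> (batch, time // stride, llm_dim)
        x = self.downsample(enc_out.transpose(1, 2)).transpose(1, 2)
        return self.proj(x)

# Usage: adapted speech embeddings are prepended to the LLM's text embeddings.
adapter = SpeechAdapter()
speech_states = torch.randn(2, 1500, 1024)  # e.g., a Whisper-style encoder output
llm_inputs = adapter(speech_states)         # (2, 375, 4096), ready for the LLM
```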