Recent studies have augmented large language models (LLMs) with speech capabilities, leading to the development of speech language models (SpeechLMs). Earlier SpeechLMs focused on single-turn speech-based question answering (QA), where the user input comprised a speech context and a text question. More recent studies have extended this to multi-turn conversations, though they often require complex, multi-stage supervised fine-tuning (SFT) with diverse data. Another critical challenge for SpeechLMs is catastrophic forgetting, where models optimized for speech tasks suffer significant degradation in text-only performance. To mitigate these issues, we propose a novel single-stage joint speech-text SFT approach applied to the low-rank adaptation (LoRA) of the LLM backbone. Our joint SFT combines text-only SFT data with three types of speech-related data: speech recognition and translation, speech-based QA, and mixed-modal SFT. Compared to previous SpeechLMs with 7B or 13B parameters, our 3B model demonstrates superior performance across various speech benchmarks while preserving the original capabilities on text-only tasks. Furthermore, our model shows emergent abilities, effectively handling previously unseen prompts and tasks, including multi-turn, mixed-modal inputs.
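As a minimal sketch, the single-stage joint mixture described above can be illustrated as weighted sampling over the four data sources. The dataset names, sampling weights, and helper function below are illustrative assumptions, not the paper's actual configuration:

```python
import random

# Hypothetical mixture for single-stage joint speech-text SFT:
# text-only SFT data plus the three speech-related data types.
# Weights are assumptions for illustration only.
MIXTURE = {
    "text_sft": 0.4,            # text-only SFT (preserves text abilities)
    "asr_and_translation": 0.2, # speech recognition and translation
    "speech_qa": 0.2,           # speech-based question answering
    "mixed_modal_sft": 0.2,     # interleaved speech/text SFT
}

def sample_batch(datasets, weights, batch_size, rng):
    """Draw one training batch by sampling each example's source
    dataset according to the mixture weights, so all four data
    types are trained jointly in a single stage."""
    names = list(weights)
    probs = [weights[n] for n in names]
    batch = []
    for _ in range(batch_size):
        src = rng.choices(names, weights=probs, k=1)[0]
        batch.append((src, rng.choice(datasets[src])))
    return batch

rng = random.Random(0)
datasets = {name: [f"{name}_example_{i}" for i in range(5)]
            for name in MIXTURE}
batch = sample_batch(datasets, MIXTURE, batch_size=8, rng=rng)
```

Because every batch can contain all four data types, the text-only portion acts as a regularizer against catastrophic forgetting while the speech portions teach the new modality, with no separate training stages.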