Recent advancements in large language models (LLMs) have revolutionized the field of natural language processing, progressively broadening their scope to multimodal perception and generation. However, effectively integrating listening capabilities into LLMs poses significant challenges, particularly with respect to generalizing across varied contexts and executing complex auditory tasks. In this work, we introduce WavLLM, a robust and adaptive speech large language model with dual encoders and a prompt-aware LoRA weight adapter, optimized by a two-stage curriculum learning approach. Leveraging the dual encoders, we decouple different types of speech information, utilizing a Whisper encoder to process the semantic content of speech and a WavLM encoder to capture the unique characteristics of the speaker's identity. Within the curriculum learning framework, WavLLM first builds its foundational capabilities by optimizing on mixed elementary single tasks, followed by advanced multi-task training on more complex tasks such as combinations of the elementary tasks. To enhance flexibility and adherence to different tasks and instructions, a prompt-aware LoRA weight adapter is introduced in the second, advanced multi-task training stage. We validate the proposed model on universal speech benchmarks including tasks such as ASR, ST, SV, and ER, and also apply it to specialized datasets such as the Gaokao English listening comprehension set for SQA and a speech Chain-of-Thought (CoT) evaluation set. Experiments demonstrate that the proposed model achieves state-of-the-art performance across a range of speech tasks at the same model size, exhibiting robust generalization capabilities in executing complex tasks using the CoT approach. Furthermore, our model successfully completes Gaokao tasks without specialized training. The codes, models, audio, and Gaokao evaluation set can be accessed at \url{aka.ms/wavllm}.