Recent end-to-end speech language models (SLMs) have expanded upon the capabilities of large language models (LLMs) by incorporating pre-trained speech models. However, these SLMs often undergo extensive speech instruction-tuning to bridge the gap between speech and text modalities. This requires significant annotation efforts and risks catastrophic forgetting of the original language capabilities. In this work, we present a simple yet effective automatic process for creating speech-text pair data that carefully injects speech paralinguistic understanding abilities into SLMs while preserving the inherent language capabilities of the text-based LLM. Our model demonstrates general capabilities for speech-related tasks without the need for speech instruction-tuning data, achieving impressive performance on Dynamic-SUPERB and AIR-Bench-Chat benchmarks. Furthermore, our model exhibits the ability to follow complex instructions derived from LLMs, such as specific output formatting and chain-of-thought reasoning. Our approach not only enhances the versatility and effectiveness of SLMs but also reduces reliance on extensive annotated datasets, paving the way for more efficient and capable speech understanding systems.