Recent speech language models (SLMs) typically incorporate pre-trained speech models to extend the capabilities of large language models (LLMs). In this paper, we propose a Descriptive Speech-Text Alignment approach that leverages speech captioning to bridge the gap between the speech and text modalities, enabling SLMs to interpret and generate comprehensive natural language descriptions and thereby to understand both linguistic and non-linguistic features in speech. Enhanced with the proposed approach, our model demonstrates superior performance on the Dynamic-SUPERB benchmark, particularly in generalizing to unseen tasks. Moreover, we find that the aligned model exhibits zero-shot instruction-following capability without explicit speech instruction tuning. These findings highlight the potential to reshape instruction-following SLMs by incorporating rich, descriptive speech captions.