Large language models (LLMs) have demonstrated the ability to understand human language by leveraging large amounts of text data. Automatic speech recognition (ASR) systems are often limited by the available transcribed speech data and benefit from second-pass rescoring with LLMs. Recently, multi-modal large language models, particularly speech-text foundation models, have demonstrated strong spoken language understanding. Speech-text foundation models leverage large amounts of unlabelled and labelled data in both the speech and text modalities to model human language. In this work, we propose novel techniques for using multi-modal LLMs for ASR rescoring. We also explore discriminative training to further improve the rescoring performance of the foundation model. We demonstrate that cross-modal knowledge transfer in speech-text LLMs can benefit rescoring. Our experiments demonstrate up to 20% relative improvement over Whisper-large ASR and up to 15% relative improvement over a text-only LLM.
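As a minimal sketch of the second-pass rescoring setup described above (not the paper's exact method): the first-pass ASR emits an N-best list of hypotheses with decoder log-scores, a language model assigns each hypothesis a log-probability, and a weighted combination of the two scores selects the final output. The `lm_score` interface and the toy scores below are illustrative assumptions.

```python
def rescore(nbest, lm_score, weight=0.5):
    """Pick the best hypothesis from an N-best list.

    nbest:    list of (hypothesis, asr_log_score) pairs from the first pass
    lm_score: callable mapping a hypothesis string to an LM log-probability
              (hypothetical interface; in practice an LLM forward pass)
    weight:   interpolation weight for the LM score
    """
    best_hyp, best_score = None, float("-inf")
    for hyp, asr_score in nbest:
        # combined score: first-pass ASR score plus weighted LM score
        combined = asr_score + weight * lm_score(hyp)
        if combined > best_score:
            best_hyp, best_score = hyp, combined
    return best_hyp

# Toy example with made-up scores: the LM prefers the grammatical
# hypothesis, overturning the first-pass ranking.
nbest = [("the cat sat", -12.0), ("the cats at", -11.5)]
toy_lm = {"the cat sat": -5.0, "the cats at": -9.0}
print(rescore(nbest, toy_lm.get))  # → the cat sat
```

In a multi-modal setting, `lm_score` would additionally condition on the speech encoder's representation of the utterance rather than on the text hypothesis alone.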