Recent studies have shown that large language models (LLMs) can be used successfully for generative error correction (GER) on top of automatic speech recognition (ASR) output. Specifically, an LLM is utilized to carry out a direct mapping from the N-best hypothesis list generated by an ASR system to the predicted output transcription. However, despite its effectiveness, GER introduces extra data uncertainty because the LLM is trained without taking into account the acoustic information available in the speech signal. In this work, we aim to overcome this limitation by infusing acoustic information before generating the predicted transcription through a novel late fusion solution termed Uncertainty-Aware Dynamic Fusion (UADF). UADF is a multimodal fusion approach implemented in an auto-regressive decoding process and works in two stages: (i) it first analyzes and calibrates the token-level LLM decisions, and (ii) it then dynamically assimilates information from the acoustic modality. Experimental evidence collected from various ASR tasks shows that UADF surpasses existing fusion mechanisms in several ways: it yields significant improvements in word error rate (WER) while mitigating data uncertainty in the LLM and addressing the poor generalization that results from relying on a single modality during fusion. We also demonstrate that UADF seamlessly adapts to audio-visual speech recognition.
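To make the two-stage idea concrete, the following is a minimal sketch (not the authors' released implementation) of uncertainty-aware fusion at a single decoding step. It assumes temperature scaling as the calibration method and a normalized-entropy rule for the dynamic weight; the function name `fuse_step`, the `temperature` value, and the weighting rule are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of uncertainty-aware dynamic fusion at one decoding step.
# Assumptions: temperature scaling for calibration, entropy-based weighting.
import torch
import torch.nn.functional as F

def fuse_step(llm_logits, asr_logits, temperature=1.5):
    """Fuse next-token distributions from an LLM and an acoustic (ASR) model.

    llm_logits, asr_logits: (vocab_size,) raw logits for the next token.
    temperature: assumed calibration temperature, tuned on a dev set.
    """
    # Stage (i): calibrate the token-level LLM decision (temperature scaling).
    llm_logp = F.log_softmax(llm_logits / temperature, dim=-1)
    asr_logp = F.log_softmax(asr_logits, dim=-1)

    # Stage (ii): derive a dynamic weight from the LLM's token-level
    # uncertainty (normalized entropy): the more uncertain the LLM is,
    # the more weight the acoustic modality receives.
    probs = llm_logp.exp()
    entropy = -(probs * llm_logp).sum()
    max_entropy = torch.log(torch.tensor(float(llm_logp.numel())))
    llm_weight = 1.0 - entropy / max_entropy

    # Late fusion of the two calibrated log-distributions.
    return llm_weight * llm_logp + (1.0 - llm_weight) * asr_logp

# Toy usage over a 10-token vocabulary with random logits.
if __name__ == "__main__":
    torch.manual_seed(0)
    fused = fuse_step(torch.randn(10), torch.randn(10))
    print("next token id:", fused.argmax().item())
```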