In this paper, we propose two novel approaches that integrate long-content information into the factorized neural transducer (FNT) architecture in both non-streaming (referred to as LongFNT) and streaming (referred to as SLongFNT) scenarios. We first investigate whether long-content transcriptions can improve vanilla conformer transducer (C-T) models. Our experiments indicate that vanilla C-T models do not benefit from long-content transcriptions, possibly because the predictor network of C-T models does not function as a pure language model. In contrast, FNT shows potential in utilizing long-content information, so we propose the LongFNT model and explore the impact of long-content information in both text (LongFNT-Text) and speech (LongFNT-Speech). The proposed LongFNT-Text and LongFNT-Speech models further complement each other to achieve better performance, with transcription history proving more valuable to the model. The effectiveness of our LongFNT approach is evaluated on the LibriSpeech and GigaSpeech corpora, where it achieves relative word error rate (WER) reductions of 19% and 12%, respectively. Furthermore, we extend the LongFNT model to the streaming scenario, named SLongFNT, which consists of the SLongFNT-Text and SLongFNT-Speech approaches for utilizing long-content text and speech information. Experiments show that the proposed SLongFNT model achieves relative WER reductions of 26% and 17% on LibriSpeech and GigaSpeech, respectively, while maintaining good latency compared to the FNT baseline. Overall, our proposed LongFNT and SLongFNT models highlight the importance of long-content speech and transcription knowledge for improving both non-streaming and streaming speech recognition systems.