The attention-based encoder-decoder (AED) speech recognition model has been widely successful in recent years. However, the joint optimization of the acoustic model and the language model in an end-to-end manner creates challenges for text adaptation. In particular, effective, fast, and inexpensive adaptation with text-only input has become a primary concern for deploying AED systems in industry. To address this issue, we propose a novel model, the hybrid attention-based encoder-decoder (HAED) speech recognition model, which preserves the modularity of conventional hybrid automatic speech recognition systems. The HAED model separates the acoustic and language models, allowing conventional text-based language model adaptation techniques to be applied. We demonstrate that the proposed HAED model yields a 23% relative Word Error Rate (WER) improvement when out-of-domain text data is used for language model adaptation, with only minor WER degradation on a general test set compared with the conventional AED model.