Neural contextual biasing allows speech recognition models to leverage contextually relevant information, leading to improved transcription accuracy. However, the biasing mechanism is typically based on a cross-attention module between the audio and a catalogue of biasing entries, whose computational complexity can pose severe practical limitations on the size of the biasing catalogue and, consequently, on accuracy improvements. This work proposes an approximation to cross-attention scoring based on vector quantization that enables compute- and memory-efficient use of large biasing catalogues. We propose to use this technique jointly with a retrieval-based contextual biasing approach. First, we use an efficient quantized retrieval module to shortlist biasing entries by grounding them in the audio. Then we use the retrieved entries for biasing. Since the proposed approach is agnostic to the biasing method, we investigate using full cross-attention, LLM prompting, and a combination of the two. We show that retrieval-based shortlisting allows the system to efficiently leverage biasing catalogues of several thousand entries, resulting in up to 71% relative error rate reduction in personal entity recognition. At the same time, the proposed approximation algorithm reduces compute time by 20% and memory usage by 85-95% for lists of up to one million entries, when compared to standard dot-product cross-attention.
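The quantized scoring idea can be illustrated with product quantization: biasing-entry key vectors are compressed into small codes, and query-key dot products are then approximated from per-subspace lookup tables instead of full float matrices. The NumPy sketch below is a minimal illustration under assumed shapes, with random codeword sampling standing in for proper k-means training; all names and dimensions are hypothetical, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: embedding size, catalogue size,
# number of subspaces, codewords per subspace.
d, n_entries, n_sub, n_codes = 64, 10_000, 8, 256
sub_d = d // n_sub

# Key embeddings, one per biasing-catalogue entry.
keys = rng.standard_normal((n_entries, d)).astype(np.float32)

# Per-subspace codebooks. Here we just sample key subvectors as
# codewords; a real system would train them (e.g. with k-means).
codebooks = np.stack([
    keys[rng.choice(n_entries, n_codes, replace=False),
         s * sub_d:(s + 1) * sub_d]
    for s in range(n_sub)
])  # shape: (n_sub, n_codes, sub_d)

# Encode each key as n_sub uint8 codes (8 bytes/entry
# instead of d * 4 = 256 bytes/entry -> large memory savings).
codes = np.empty((n_entries, n_sub), dtype=np.uint8)
for s in range(n_sub):
    sub = keys[:, s * sub_d:(s + 1) * sub_d]
    dists = ((sub[:, None, :] - codebooks[s][None, :, :]) ** 2).sum(-1)
    codes[:, s] = dists.argmin(1)

def approx_scores(query):
    """Approximate query.key for all entries via codebook lookups."""
    # Precompute query . codeword per subspace: (n_sub, n_codes).
    tables = np.einsum('scd,sd->sc', codebooks,
                       query.reshape(n_sub, sub_d))
    # Sum the table entries selected by each key's codes.
    return tables[np.arange(n_sub), codes].sum(axis=1)

query = rng.standard_normal(d).astype(np.float32)
exact = keys @ query                     # standard dot-product scores
approx = approx_scores(query)            # quantized approximation
shortlist = np.argsort(-approx)[:10]     # entries passed on for biasing
```

The shortlisted entries would then feed the downstream biasing mechanism (full cross-attention over the shortlist, or an LLM prompt containing the retrieved entries), so the expensive exact scoring only ever touches a small subset of the catalogue.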