Modern speech processing systems rely on self-attention. Unfortunately, token mixing with self-attention takes quadratic time in the length of the speech utterance, which slows training and inference and increases memory consumption. Cheaper alternatives to self-attention for ASR have been developed, but they fail to consistently reach the same level of accuracy. This paper therefore proposes a novel linear-time alternative to self-attention: it summarises an utterance with the mean over the vectors for all time steps, then combines this single summary with time-specific information. We call this method "SummaryMixing". Introducing SummaryMixing into state-of-the-art ASR models preserves or exceeds previous speech recognition performance while making training and inference up to 28% faster and halving memory use.
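The core idea can be sketched in a few lines: compute a local, time-specific transform of each frame, average a second transform over all time steps into a single summary vector, and combine the two. The sketch below is a minimal illustration under assumed simplifications — plain tanh-activated linear maps and illustrative weight names (`W_f`, `W_s`, `W_c`); the parameterisation in the actual model may differ. Note that every operation is linear in the sequence length T.

```python
import numpy as np

def summary_mixing(x, W_f, W_s, W_c):
    """Linear-time token mixing: x has shape (T, d).

    Hypothetical parameterisation for illustration only:
    W_f, W_s: (d, d) local and summary transforms; W_c: (2d, d) combiner.
    """
    f = np.tanh(x @ W_f)                 # time-specific branch, O(T * d^2)
    s = np.tanh(x @ W_s).mean(axis=0)    # single summary: mean over all time steps
    s_tiled = np.broadcast_to(s, f.shape)  # share the one summary with every step
    # Combine local information with the global summary at each time step.
    return np.tanh(np.concatenate([f, s_tiled], axis=-1) @ W_c)

rng = np.random.default_rng(0)
d = 8
W_f = rng.standard_normal((d, d)) * 0.1
W_s = rng.standard_normal((d, d)) * 0.1
W_c = rng.standard_normal((2 * d, d)) * 0.1
x = rng.standard_normal((50, d))       # an utterance of T = 50 frames
y = summary_mixing(x, W_f, W_s, W_c)   # shape (50, 8)
```

Unlike self-attention, no T×T interaction matrix is ever formed: doubling the utterance length doubles the cost rather than quadrupling it.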