Conformer-based attention models have become the de facto backbone for automatic speech recognition (ASR). A blank symbol is usually introduced to align the input and output sequences in CTC or RNN-T models. Unfortunately, long input sequences inflate the computation and memory consumption of the attention mechanism quadratically. In this work, we propose a "Skip-and-Recover" Conformer architecture, named Skipformer, which shrinks the input sequence length dynamically and inhomogeneously. Skipformer uses an intermediate CTC output as the criterion to split frames into three groups: crucial, skipping, and ignoring. Only the crucial group is fed into the subsequent Conformer blocks; its output is then joined with the skipping group in the original temporal order to form the final encoder output. Experiments show that our model reduces the input sequence length by a factor of 31 on the Aishell-1 corpus and 22 on the LibriSpeech corpus. Meanwhile, the model achieves better recognition accuracy and faster inference speed than recent baseline models. Our code is open-sourced and available online.
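The grouping-and-recovery mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the blank index, the confidence threshold, and the toy "remaining blocks" function are assumptions made here for clarity.

```python
import numpy as np

BLANK = 0  # assumed blank-symbol index in the CTC vocabulary

def group_frames(posteriors, skip_thresh=0.9):
    """Split frame indices into crucial / skipping / ignoring groups
    based on intermediate CTC posteriors of shape (T, V).

    Illustrative rule (an assumption, not the paper's exact criterion):
    non-blank frames are crucial; confidently blank frames are ignored;
    uncertain blank frames are skipped (bypassed but kept).
    """
    top = posteriors.argmax(axis=-1)
    conf = posteriors.max(axis=-1)
    crucial = np.where(top != BLANK)[0]
    ignoring = np.where((top == BLANK) & (conf >= skip_thresh))[0]
    skipping = np.where((top == BLANK) & (conf < skip_thresh))[0]
    return crucial, skipping, ignoring

def skip_and_recover(feats, posteriors, block):
    """Run only crucial frames through the remaining block(s), then
    re-merge them with the skipped frames in original temporal order.
    `block` stands in for the subsequent Conformer blocks."""
    crucial, skipping, _ = group_frames(posteriors)
    out_crucial = block(feats[crucial])  # attention now sees a shorter sequence
    keep = np.sort(np.concatenate([crucial, skipping]))
    merged = np.empty((len(keep), feats.shape[1]), dtype=feats.dtype)
    pos = {t: i for i, t in enumerate(keep)}
    for i, t in enumerate(crucial):      # processed crucial frames
        merged[pos[t]] = out_crucial[i]
    for t in skipping:                   # skipped frames pass through unchanged
        merged[pos[t]] = feats[t]
    return merged
```

Because attention cost grows quadratically with sequence length, processing only the crucial frames through the later blocks yields the speedups the abstract reports, while re-inserting the skipping frames preserves the temporal structure of the encoder output.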