Modern automatic speech recognition (ASR) models, such as OpenAI's Whisper, rely on deep encoder-decoder architectures, and their encoders are a critical bottleneck for efficient deployment due to high computational intensity. We introduce LiteASR, a low-rank compression scheme for ASR encoders that significantly reduces inference costs while maintaining transcription accuracy. Our approach leverages the strong low-rank properties observed in intermediate activations: by applying principal component analysis (PCA) with a small calibration dataset, we approximate linear transformations with a chain of low-rank matrix multiplications, and further optimize self-attention to work in reduced dimensionality. Evaluation results show that our method can compress Whisper large-v3's encoder size by over 50%, matching Whisper medium's size with better transcription accuracy, thereby establishing a new Pareto frontier of accuracy and efficiency. The code of LiteASR is available at https://github.com/efeslab/LiteASR.
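The abstract only sketches the compression step. As a rough illustration (not the authors' implementation), the snippet below shows how a single linear layer can be replaced by a chain of two smaller matrix multiplications using PCA over its output activations on a small calibration set; the rank, calibration tensor, and helper names are illustrative assumptions.

```python
# Minimal sketch of PCA-based low-rank compression of one linear layer.
# Assumes real calibration activations would exhibit the low-rank structure
# described in the paper; random data here is only a stand-in.
import torch

@torch.no_grad()
def low_rank_factorize(linear: torch.nn.Linear, calib_inputs: torch.Tensor, rank: int):
    """Return (A, B, b') so that linear(x) ~= (x @ B.T + b') @ A.T."""
    Y = linear(calib_inputs)                        # [n_tokens, d_out] calibration outputs
    _, _, Vt = torch.linalg.svd(Y, full_matrices=False)
    V_k = Vt[:rank].T                               # [d_out, rank] top-k principal directions
    A = V_k                                         # [d_out, rank]
    B = V_k.T @ linear.weight                       # [rank, d_in]  (W ~= V_k V_k^T W)
    b = V_k.T @ linear.bias                         # [rank] projected bias
    return A, B, b

# Usage sketch: swap a d_in -> d_out layer for d_in -> rank and rank -> d_out layers.
d_in, d_out, rank = 1280, 1280, 320
layer = torch.nn.Linear(d_in, d_out)
calib = torch.randn(4096, d_in)                     # stand-in for real calibration activations
A, B, b = low_rank_factorize(layer, calib, rank)
with torch.no_grad():
    down = torch.nn.Linear(d_in, rank)
    down.weight.copy_(B); down.bias.copy_(b)
    up = torch.nn.Linear(rank, d_out, bias=False)
    up.weight.copy_(A)
    x = torch.randn(8, d_in)
    print(torch.dist(layer(x), up(down(x))))        # approximation error
```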