Recent advances in automatic speech recognition (ASR) often rely on large speech foundation models to generate high-quality transcriptions. However, these models can be impractical to deploy when computing resources are limited. The problem is even more severe in realistic and difficult scenarios such as code-switching ASR (CS-ASR). To address this, we present a framework for developing more efficient CS-ASR models through knowledge distillation on realistic speech-only data. Our proposed method, Leave No Knowledge Behind During Knowledge Distillation (K$^2$D), leverages both the teacher model's knowledge and additional insights from a small auxiliary model. We evaluate our approach on two in-domain and two out-of-domain datasets and demonstrate that K$^2$D is effective. By conducting K$^2$D on the unlabeled realistic data, we obtain a model that is two times smaller and five times faster at generation, while outperforming the baseline methods and the teacher model on all test sets. We have made our model publicly available on Hugging Face (https://huggingface.co/andybi7676/k2d-whisper.zh-en).
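The abstract only describes K$^2$D at a high level. As a rough illustration of the general recipe it mentions (distilling a large teacher into a smaller student on unlabeled, speech-only data, with a small auxiliary model contributing an extra signal), the sketch below implements generic pseudo-label distillation with auxiliary confidence filtering. This is a minimal sketch under stated assumptions, not the paper's actual procedure; all class and function names (ToyTeacher, ToyAuxiliary, ToyStudent, distill_step) are hypothetical stand-ins.

```python
# Hypothetical sketch of pseudo-label distillation with an auxiliary filter.
# The real K^2D method is not specified in the abstract; every model here is a toy stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MAXLEN, FEAT = 100, 8, 16  # toy sizes for illustration only


class ToyTeacher(nn.Module):
    """Stand-in for a large speech foundation model that transcribes unlabeled speech."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(FEAT, VOCAB)

    @torch.no_grad()
    def generate(self, feats):                       # feats: (B, T, FEAT)
        return self.proj(feats).argmax(-1)           # pseudo-label token ids, shape (B, T)


class ToyAuxiliary(nn.Module):
    """Stand-in for a small auxiliary model that scores pseudo-label quality."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(FEAT, VOCAB)

    @torch.no_grad()
    def score(self, feats, labels):
        # Per-utterance confidence: exp of the mean log-probability the auxiliary
        # model assigns to the teacher's pseudo-labels.
        logp = F.log_softmax(self.proj(feats), dim=-1)
        picked = logp.gather(-1, labels.unsqueeze(-1)).squeeze(-1)  # (B, T)
        return picked.mean(dim=-1).exp()             # (B,) values in [0, 1]


class ToyStudent(nn.Module):
    """Stand-in for the smaller, faster student model being distilled."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(FEAT, VOCAB)

    def forward(self, feats):
        return self.proj(feats)                      # (B, T, VOCAB) logits


def distill_step(teacher, auxiliary, student, optim, feats, threshold=0.0):
    """One distillation step on a batch of unlabeled speech features."""
    with torch.no_grad():
        pseudo = teacher.generate(feats)             # teacher pseudo-transcripts
        conf = auxiliary.score(feats, pseudo)        # auxiliary confidence per utterance
    keep = conf >= threshold                         # drop low-confidence pseudo-labels
    if keep.sum() == 0:
        return None
    logits = student(feats[keep])
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), pseudo[keep].reshape(-1))
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()


if __name__ == "__main__":
    teacher, aux, student = ToyTeacher(), ToyAuxiliary(), ToyStudent()
    optim = torch.optim.Adam(student.parameters(), lr=1e-3)
    unlabeled_speech = torch.randn(4, MAXLEN, FEAT)  # stand-in for speech-only data
    # threshold=0.0 keeps every utterance in this toy demo; a real run would tune it.
    print(distill_step(teacher, aux, student, optim, unlabeled_speech))
```

The confidence filter is one plausible way a small auxiliary model could contribute "additional insights"; the actual mechanism used in K$^2$D may differ and is described in the paper itself.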