Whisper is a multitask and multilingual speech model covering 99 languages. It yields commendable automatic speech recognition (ASR) results in a subset of its covered languages, but the model still underperforms on a non-negligible number of under-represented languages, a problem exacerbated in smaller model versions. In this work, we propose DistilWhisper, an approach that bridges the ASR performance gap for these languages while retaining the advantages of multitask and multilingual capabilities. Our approach involves two key strategies: lightweight modular ASR fine-tuning of whisper-small using language-specific experts, and knowledge distillation from whisper-large-v2. This dual approach allows us to effectively boost ASR performance while keeping the robustness inherited from the multitask and multilingual pre-training. Results demonstrate that our approach is more effective than standard fine-tuning or LoRA adapters, boosting performance in the targeted languages for both in- and out-of-domain test sets, while introducing only a negligible parameter overhead at inference.
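As a rough illustration of the training objective described above, the sketch below combines supervised fine-tuning of whisper-small with knowledge distillation from whisper-large-v2, assuming PyTorch and Hugging Face transformers. It is a minimal sketch, not the paper's implementation: the gated language-specific experts are approximated here by unfreezing only the student's feed-forward sublayers, and data loading, the gating mechanism, and the exact loss weighting are omitted; the names `distil_loss`, `temperature`, and `alpha` are illustrative.

```python
# Minimal sketch of a DistilWhisper-style objective: cross-entropy fine-tuning
# of the small student plus temperature-scaled KL distillation from the large
# teacher. Both multilingual Whisper checkpoints share the same vocabulary, so
# their output distributions are directly comparable.
import torch
import torch.nn.functional as F
from transformers import WhisperForConditionalGeneration

student = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
teacher = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
teacher.eval()
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is frozen; it only provides soft targets

# Stand-in for the paper's language-specific experts (an assumption of this
# sketch): train only the student's feed-forward sublayers, freezing the rest.
for name, p in student.named_parameters():
    p.requires_grad_("fc1" in name or "fc2" in name)

def distil_loss(input_features, labels, temperature=2.0, alpha=0.5):
    """CE on gold transcripts + KL to the teacher's softened distribution."""
    s_out = student(input_features=input_features, labels=labels)
    with torch.no_grad():
        t_logits = teacher(input_features=input_features, labels=labels).logits
    kl = F.kl_div(
        F.log_softmax(s_out.logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2  # rescale gradients to be temperature-independent
    return alpha * s_out.loss + (1 - alpha) * kl
```

Freezing the teacher and most of the student keeps the trainable (and added) parameter count small, which is what makes the per-language modules a negligible overhead at inference.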