Recent work on distilling Whisper's knowledge into small models using pseudo-labels shows promising performance while reducing model size by up to 50\%, yielding small, efficient, and dedicated models. However, a critical step in distillation from pseudo-labels is filtering out low-quality predictions and training only on the remainder. This step requires ground-truth transcripts against which predictions are compared, making the whole process supervised. Moreover, distillation requires a large amount of data, which limits its applicability in low-resource settings. To address these challenges, we propose an unsupervised, label-free distillation framework that eliminates the need for labeled data altogether. Through experimentation, we show that our best distilled models outperform the teacher model by 5-7 WER points. Our models are also on par with or better than a comparable supervised data-filtering setup. When we scale the data, our models significantly outperform all zero-shot and supervised models. In this work, we demonstrate that large Whisper models can be distilled into relatively small models without using any labeled data. As a result, our distilled models are 25-50\% more compute- and memory-efficient while matching or exceeding the teacher model's performance.
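To make concrete why conventional pseudo-label filtering is supervised, the sketch below shows a minimal version of that filtering step. The function names, dataset format, WER cutoff, and the use of the \texttt{jiwer} package are illustrative assumptions, not our actual pipeline.

\begin{verbatim}
# Illustrative sketch only (not the paper's implementation): the supervised
# pseudo-label filtering step that a label-free framework seeks to remove.
# `transcribe_fn`, `dataset`, and the 10% WER cutoff are hypothetical;
# the open-source `jiwer` package computes word error rate.
from jiwer import wer

def filter_pseudo_labels(dataset, transcribe_fn, max_wer=0.10):
    """Keep (audio, pseudo_label) pairs whose teacher prediction is close
    to the ground-truth transcript. Needing `reference` here is exactly
    what makes the conventional pipeline supervised."""
    kept = []
    for audio, reference in dataset:          # reference = human transcript
        pseudo_label = transcribe_fn(audio)   # teacher (Whisper) prediction
        if wer(reference, pseudo_label) <= max_wer:
            kept.append((audio, pseudo_label))
    return kept
\end{verbatim}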