This paper explores sequence-level knowledge distillation (KD) of multilingual pre-trained encoder-decoder translation models. We argue that the teacher model's output distribution holds valuable insights for the student beyond the approximate mode obtained through beam search (the standard decoding method), and present Multi-Hypothesis Distillation (MHD), a sequence-level KD method that generates multiple translations for each source sentence. This provides a broader representation of the teacher model's distribution and exposes the student model to a wider range of target-side prefixes. We leverage $n$-best lists from beam search to guide the student's learning and examine alternative decoding methods to address issues such as low variability and the under-representation of infrequent tokens. For low-resource languages, our research shows that while sampling methods may slightly compromise translation quality compared to beam-search-based approaches, they enrich the generated corpora with greater variability and lexical richness. This ultimately improves student model performance and mitigates the gender bias amplification often associated with KD.
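To make the data-generation step concrete, the following is a minimal sketch of how multiple teacher hypotheses per source sentence could be produced with the Hugging Face transformers `generate` API, contrasting $n$-best beam search with independent sampling. This is not the authors' implementation: the model name (a bilingual Marian model standing in for the multilingual teacher), hypothesis counts, and generation hyperparameters are illustrative assumptions.

```python
# Sketch: producing multi-hypothesis KD data from a teacher translation model.
# Model choice and hyperparameters are illustrative, not the paper's setup.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-mt-en-de"  # stand-in for the multilingual teacher
tokenizer = AutoTokenizer.from_pretrained(model_name)
teacher = AutoModelForSeq2SeqLM.from_pretrained(model_name)


def n_best_beam(src: str, n: int = 4) -> list[str]:
    """n-best list from beam search: the top-n completed beams per source."""
    inputs = tokenizer(src, return_tensors="pt")
    out = teacher.generate(
        **inputs,
        num_beams=max(n, 4),       # beam width must be >= number of returned beams
        num_return_sequences=n,
        max_new_tokens=128,
    )
    return tokenizer.batch_decode(out, skip_special_tokens=True)


def n_samples(src: str, n: int = 4, temperature: float = 1.0) -> list[str]:
    """n independent samples from the teacher distribution: typically more
    varied and lexically richer than the n-best beams."""
    inputs = tokenizer(src, return_tensors="pt")
    out = teacher.generate(
        **inputs,
        do_sample=True,
        temperature=temperature,
        num_return_sequences=n,
        max_new_tokens=128,
    )
    return tokenizer.batch_decode(out, skip_special_tokens=True)


# Each source sentence is then paired with all of its hypotheses, and the
# student is trained on the enlarged (source, hypothesis) corpus as in
# standard sequence-level KD.
print(n_best_beam("The cat sat on the mat."))
print(n_samples("The cat sat on the mat."))
```

In this sketch, the beam-search path corresponds to the $n$-best lists discussed above, while the sampling path corresponds to the alternative decoding methods examined for increasing variability and coverage of infrequent tokens.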