This paper explores sequence-level knowledge distillation (KD) of multilingual pre-trained encoder-decoder translation models. We argue that the teacher model's output distribution holds valuable information for the student beyond the approximate mode obtained through beam search (the standard decoding method), and present Multi-Hypothesis Distillation (MHD), a sequence-level KD method that generates multiple translations for each source sentence. This provides a broader representation of the teacher distribution and exposes the student model to a wider range of target-side prefixes. We leverage $n$-best lists from beam search to guide the student's learning and examine alternative decoding methods to address issues such as low variability and the under-representation of infrequent tokens. For low-resource languages, our research shows that while sampling methods may slightly compromise translation quality compared to beam-search-based approaches, they enrich the generated corpora with greater variability and lexical richness. This ultimately improves student model performance and mitigates the gender bias amplification often associated with KD.
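To make the data-generation step concrete, the following is a minimal sketch (not the paper's code) of how multi-hypothesis distillation corpora could be produced with a multilingual encoder-decoder teacher via Hugging Face Transformers. The teacher checkpoint, language codes, and generation hyperparameters are illustrative assumptions, not choices reported in the paper.

```python
# Sketch: generating multiple teacher hypotheses per source sentence,
# either as an n-best list from beam search or via nucleus sampling.
# Model name, language codes, and hyperparameters are assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

teacher_name = "facebook/nllb-200-distilled-600M"  # assumed multilingual teacher
tokenizer = AutoTokenizer.from_pretrained(teacher_name, src_lang="eng_Latn")
teacher = AutoModelForSeq2SeqLM.from_pretrained(teacher_name)

source = "The committee approved the proposal yesterday."
inputs = tokenizer(source, return_tensors="pt")
target_lang_id = tokenizer.convert_tokens_to_ids("deu_Latn")  # assumed target language

# (1) n-best list: the n highest-scoring hypotheses retained by beam search.
nbest = teacher.generate(
    **inputs,
    forced_bos_token_id=target_lang_id,
    num_beams=8,
    num_return_sequences=8,
)

# (2) Alternative decoding: sampling yields more varied hypotheses and surfaces
# lower-frequency tokens, typically at some cost in translation quality.
sampled = teacher.generate(
    **inputs,
    forced_bos_token_id=target_lang_id,
    do_sample=True,
    top_p=0.9,
    num_return_sequences=8,
)

for hypothesis in tokenizer.batch_decode(sampled, skip_special_tokens=True):
    print(hypothesis)

# Each source sentence is then paired with all of its generated hypotheses to
# form the enlarged synthetic corpus on which the student model is trained.
```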