Parameter-efficient fine-tuning (PEFT) methods are increasingly vital for adapting large-scale pre-trained language models to diverse tasks, offering a balance between adaptability and computational efficiency. They are particularly important in Low-Resource Language (LRL) Neural Machine Translation (NMT), where they can improve translation accuracy with minimal resources. However, their practical effectiveness varies significantly across languages. We conducted comprehensive empirical experiments across varying LRL domains and dataset sizes to evaluate the performance of 8 PEFT methods, spanning 15 architectures in total, using the SacreBLEU score. We show that 6 PEFT architectures outperform the baseline on both in-domain and out-of-domain tests, and that the Houlsby+Inversion adapter performs best overall, demonstrating the effectiveness of PEFT methods.