Adapting large language models (LLMs) to new languages typically involves continual pre-training (CT) followed by supervised fine-tuning (SFT). However, this CT-then-SFT approach struggles when data is limited, as in low-resource languages, and fails to balance language modeling and task-solving capabilities. We thus propose model merging as an alternative for low-resource languages, combining models with distinct capabilities into a single model without additional training. We use model merging to develop task-solving LLMs for low-resource languages without SFT data in the target languages. Our experiments based on Llama-2-7B demonstrate that model merging effectively endows LLMs for low-resource languages with task-solving abilities, outperforming CT-then-SFT in scenarios with extremely scarce data. Observing that model merging performance saturates as training tokens increase, we further analyze the merging process and introduce a slack variable into the model merging algorithm to mitigate the loss of important parameters, thereby enhancing performance. We hope that model merging, with its higher data efficiency, can benefit more human languages suffering from data scarcity.
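To make the core idea concrete, the following is a minimal sketch of task-vector-style model merging: each fine-tuned model's contribution is expressed as a delta from the shared base, and the deltas are averaged back onto the base without any further training. The function name, the dict-of-arrays representation, and the uniform averaging are illustrative assumptions, not the specific algorithm (or slack-variable extension) used in the paper.

```python
import numpy as np

def merge_task_vectors(base, finetuned, scale=1.0):
    """Merge fine-tuned models into a base model via averaged task vectors.

    base: dict mapping parameter names to arrays (the pre-trained model).
    finetuned: list of dicts with the same keys, e.g. one model continually
               pre-trained on the target language and one SFT'd for tasks.
    scale: scaling factor applied to the averaged task vector (assumption:
           a single global coefficient; real merging methods often tune this).
    """
    merged = {}
    for name, base_w in base.items():
        # Task vector = fine-tuned weights minus base weights.
        task_vecs = [ft[name] - base_w for ft in finetuned]
        # Average the task vectors and add them back onto the base.
        merged[name] = base_w + scale * np.mean(task_vecs, axis=0)
    return merged
```

In this toy setup, merging a "language" model and a "task" model simply combines their parameter deltas, which is why no SFT data in the target language is needed: the task-solving delta comes from another language's SFT model.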