Large language models (LLMs) have made substantial progress on classification and text-generation tasks. However, they are trained predominantly on English data and often struggle with low-resource languages. In this study, we explore adding a new language, namely Persian, to Llama (a model with limited understanding of Persian) using parameter-efficient fine-tuning. We employ a multi-stage approach involving pretraining on monolingual Persian data, aligning representations through bilingual pretraining and instruction datasets, and instruction-tuning with task-specific datasets. We evaluate the model's performance on generation and classification tasks at each stage. Our findings suggest that incorporating Persian through bilingual data alignment can enhance classification accuracy on Persian tasks, with no adverse impact on English tasks and, in some cases, even improvements. The results also highlight the model's initial strength as a critical factor when training data is limited, with cross-lingual alignment offering minimal benefit for the low-resource language. Knowledge transfer from English to Persian has only a marginal effect, primarily benefiting simple classification tasks.
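For concreteness, the sketch below illustrates the kind of parameter-efficient fine-tuning setup the abstract describes, assuming LoRA adapters via the Hugging Face `peft` library. The abstract does not name a specific PEFT method, base checkpoint, or hyperparameters, so the model name, rank, target modules, and scaling values here are all illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch of parameter-efficient fine-tuning on a Llama model,
# assuming LoRA via Hugging Face `peft` (method and all hyperparameters
# are assumptions; the paper's actual configuration may differ).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.bfloat16
)

# The same adapter configuration can be reused across the multi-stage
# pipeline -- stage 1 (monolingual Persian pretraining), stage 2
# (bilingual alignment), stage 3 (instruction tuning) -- with only the
# training corpus changing between stages.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                 # assumed adapter rank
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama
)
model = get_peft_model(model, lora_config)

# Only the small adapter matrices are trainable; the base weights stay
# frozen, which is what makes the approach parameter-efficient.
model.print_trainable_parameters()
```

In such a setup, each training stage would feed its own dataset (monolingual Persian text, bilingual parallel/instruction pairs, then task-specific instructions) through a standard causal-LM training loop while the frozen base model retains its original English capabilities.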