Instruction-tuned large language models have demonstrated remarkable capabilities in following human instructions across various domains. However, their proficiency remains notably deficient in many low-resource languages. To address this challenge, we begin by introducing FarsInstruct, a comprehensive instruction dataset designed to enhance the instruction-following ability of large language models specifically for Persian, a significant yet globally underrepresented language. FarsInstruct encompasses a wide range of task types and datasets, each containing a mix of straightforward to complex manually written instructions, as well as translations from the Public Pool of Prompts, ensuring rich linguistic and cultural representation. Furthermore, we introduce Co-CoLA, a framework designed to enhance the multi-task adaptability of LoRA-tuned models. Through extensive experimental analyses, our study demonstrates the effectiveness of the FarsInstruct dataset, coupled with training under the Co-CoLA framework, in improving the performance of large language models within the Persian context. As of this writing, FarsInstruct comprises 197 templates across 21 distinct datasets, and we intend to update it consistently to broaden its applicability.