Instruction-tuned large language models, such as T0, have demonstrated remarkable capabilities in following instructions across various domains. However, their proficiency remains notably deficient in many low-resource languages. To address this challenge, we introduce FarsInstruct: a comprehensive instruction dataset designed to enhance the instruction-following ability of large language models specifically for the Persian language, a significant yet underrepresented language globally. FarsInstruct encompasses a wide range of task types and datasets, each containing a mix of straightforward to complex manually written instructions, as well as translations from the Public Pool of Prompts, ensuring rich linguistic and cultural representation. Furthermore, we introduce Co-CoLA, a framework designed to enhance the multi-task adaptability of LoRA-tuned models. Through extensive experimental analyses, our study showcases the effectiveness of the FarsInstruct dataset, coupled with training under the Co-CoLA framework, in improving the performance of large language models within the Persian context. As of this writing, FarsInstruct comprises more than 200 templates across 21 distinct datasets, and we intend to update it consistently, thus augmenting its applicability.