We introduce Atlas-Chat, the first collection of LLMs specifically developed for dialectal Arabic. Focusing on Moroccan Arabic, also known as Darija, we construct our instruction dataset by consolidating existing Darija language resources, creating novel datasets both manually and synthetically, and translating English instructions under stringent quality control. The Atlas-Chat-2B, 9B, and 27B models, fine-tuned on this dataset, exhibit superior ability in following Darija instructions and performing standard NLP tasks. Notably, our models outperform both state-of-the-art and Arabic-specialized LLMs such as LLaMa, Jais, and AceGPT; for example, our 9B model gains a 13% performance boost over a larger 13B model on DarijaMMLU, part of our newly introduced evaluation suite for Darija covering both discriminative and generative tasks. Furthermore, we perform an experimental analysis of various fine-tuning strategies and base model choices to determine optimal configurations. All our resources are publicly accessible, and we believe our work offers a comprehensive design methodology for instruction-tuning low-resource languages, which are often neglected in favor of data-rich languages by contemporary LLMs.