Large Language Models (LLMs) have significantly advanced the field of natural language processing, enhancing capabilities in both language understanding and generation across diverse domains. However, developing LLMs for Arabic presents unique challenges. This paper explores these challenges by focusing on critical aspects such as data curation, tokenizer design, and evaluation. We detail our approach to the collection and filtering of Arabic pre-training datasets, assess the impact of various tokenizer designs on model performance, and examine the limitations of existing Arabic evaluation frameworks, for which we propose a systematic corrective methodology. To promote transparency and facilitate collaborative development, we share our data and methodologies, contributing to the advancement of language modeling, particularly for the Arabic language.