The predominance of English and Latin-based large language models (LLMs) has led to a notable deficit in native Arabic LLMs. This discrepancy is accentuated by the prevalent inclusion of English tokens in existing Arabic models, detracting from their efficacy in processing native Arabic's intricate morphology and syntax. Consequently, there is a theoretical and practical imperative for developing LLMs predominantly focused on Arabic linguistic elements. To address this gap, this paper proposes ArabianGPT, a series of transformer-based models within the ArabianLLM suite designed explicitly for Arabic. These models, including ArabianGPT-0.1B and ArabianGPT-0.3B, vary in size and complexity, aligning with the nuanced linguistic characteristics of Arabic. The AraNizer tokenizer, integral to these models, addresses the unique morphological aspects of Arabic script, ensuring more accurate text processing. Empirical results from fine-tuning the models on tasks like sentiment analysis and summarization demonstrate significant improvements. For sentiment analysis, the fine-tuned ArabianGPT-0.1B model achieved a remarkable accuracy of 95%, a substantial increase from the base model's 56%. Similarly, in summarization tasks, fine-tuned models showed enhanced F1 scores, indicating improved precision and recall in generating concise summaries. Comparative analysis of fine-tuned ArabianGPT models against their base versions across various benchmarks reveals nuanced differences in performance, with fine-tuning positively impacting specific tasks like question answering and summarization. These findings underscore the efficacy of fine-tuning in aligning ArabianGPT models more closely with specific NLP tasks, highlighting the potential of tailored transformer architectures in advancing Arabic NLP.