This study presents EgyBERT, an Arabic language model pretrained on 10.4 GB of Egyptian dialectal texts. We evaluated EgyBERT's performance by comparing it with five other multidialect Arabic language models across 10 evaluation datasets. EgyBERT achieved the highest average F1-score of 84.25% and an accuracy of 87.33%, significantly outperforming all other comparative models, with MARBERTv2 the second-best model, achieving an F1-score of 83.68% and an accuracy of 87.19%. Additionally, we introduce two novel Egyptian dialectal corpora: the Egyptian Tweets Corpus (ETC), containing over 34.33 million tweets (24.89 million sentences) amounting to 2.5 GB of text, and the Egyptian Forums Corpus (EFC), comprising over 44.42 million sentences (7.9 GB of text) collected from various Egyptian online forums. Both corpora were used to pretrain the new model and are the largest Egyptian dialectal corpora reported in the literature to date. Furthermore, this is the first study to evaluate the performance of various language models on Egyptian dialect datasets, revealing significant differences in performance that highlight the need for more dialect-specific models. The results confirm the effectiveness of the EgyBERT model in processing and analyzing Arabic text expressed in Egyptian dialect, surpassing the other language models included in the study. EgyBERT is publicly available at \url{https://huggingface.co/faisalq/EgyBERT}.
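For context on the reported averages, the F1-score is the harmonic mean of precision and recall, and the average F1-score is taken across the evaluation datasets. The sketch below illustrates the standard definitions; the counts are hypothetical and the unweighted macro average is an assumption, not quoted from the paper's evaluation code:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall for one class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_average(scores: list[float]) -> float:
    """Unweighted mean of per-dataset F1-scores (assumed averaging scheme)."""
    return sum(scores) / len(scores)

# Hypothetical confusion counts for two datasets (not from the paper):
per_dataset = [f1_score(80, 20, 20), f1_score(90, 10, 30)]
average_f1 = macro_average(per_dataset)
```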