Arabic Language Models (LMs) are pretrained predominantly on Modern Standard Arabic (MSA) and are expected to transfer to its dialects. While MSA, as the standard written variety, is commonly used in formal settings, people speak and write online in a range of dialects spread across the Arab region. This poses limitations for Arabic LMs, since these dialects vary in how closely they resemble MSA. In this work we study cross-lingual transfer in Arabic models using probing on three Natural Language Processing (NLP) tasks and representational similarity analysis. Our results indicate that transfer is possible but disproportionate across dialects, which we find to be partially explained by their geographic proximity. Furthermore, we find evidence of negative interference in models trained to support all Arabic dialects. This calls the assumed similarity of these dialects into question and raises concerns for cross-lingual transfer in Arabic models.
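The abstract mentions representational similarity analysis without naming a specific metric. One widely used metric for comparing model representations is linear Centered Kernel Alignment (CKA); the sketch below is a generic illustration of that metric, not necessarily the one used in this work, and all variable names are hypothetical.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices of shape (n_samples, dim).

    Rows are hidden states for the same n inputs (e.g., the same sentences
    encoded in MSA and in a dialect); returns a similarity in [0, 1].
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

# Hypothetical usage: identical representations score 1, unrelated ones score low.
rng = np.random.default_rng(0)
msa_reprs = rng.normal(size=(200, 64))      # stand-in for MSA hidden states
dialect_reprs = rng.normal(size=(200, 64))  # stand-in for dialect hidden states
print(linear_cka(msa_reprs, msa_reprs))     # 1.0 (self-similarity)
print(linear_cka(msa_reprs, dialect_reprs)) # low value for random matrices
```

CKA is invariant to orthogonal transformations and isotropic scaling of the representations, which makes it a common choice for comparing layers across separately trained models.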