Dialectal Arabic (DA) varieties are underserved by language technologies, particularly large language models (LLMs). This trend threatens to exacerbate existing social inequalities and limits language modeling applications, yet the research community lacks operationalized measurements of LLM performance in DA. We present a method that comprehensively evaluates LLM fidelity, understanding, quality, and diglossia in modeling DA. We evaluate nine LLMs across these four dimensions in eight DA varieties and provide best-practice recommendations. Our evaluation suggests that LLMs do not produce DA as well as they understand it, but does not indicate a deterioration in quality when they do produce it. Further analysis suggests that current post-training can degrade DA capabilities, that few-shot examples can overcome this and other LLM deficiencies, and that no other measurable features of the input text correlate well with LLM performance on DA.