Large Language Models (LLMs) play a critical role in how humans access information. While their core use relies on comprehending written requests, our understanding of this ability is currently limited, because most benchmarks evaluate LLMs in high-resource languages predominantly spoken by Western, Educated, Industrialised, Rich, and Democratic (WEIRD) communities. The default assumption is that English is the best-performing language for LLMs, while smaller, low-resource languages are linked to less reliable outputs, even in multilingual, state-of-the-art models. To track variation in the comprehension abilities of LLMs, we prompt three popular models on a language comprehension task across 12 languages, representing the Indo-European, Afro-Asiatic, Turkic, Sino-Tibetan, and Japonic language families. Our results suggest that the models exhibit remarkable linguistic accuracy across typologically diverse languages, yet fall behind human baselines in all of them, albeit to different degrees. Contrary to expectations, English is not the best-performing language: it is systematically outperformed by several Romance languages, even lower-resource ones. We frame the results by discussing several factors that drive LLM performance, such as tokenization, language distance from Spanish and English, training data size, and data origin in high- vs. low-resource languages and WEIRD vs. non-WEIRD communities.