As large language models (LLMs) advance rapidly, evaluating their performance is critical. LLMs are trained on multilingual data, yet their reasoning abilities are evaluated mainly on English datasets. Robust evaluation frameworks are therefore needed that use high-quality non-English datasets, especially for low-resource languages (LRLs). This study evaluates eight state-of-the-art (SOTA) LLMs on Latvian and Giriama, with English as a baseline, using a Massive Multitask Language Understanding (MMLU) subset curated with native speakers for linguistic and cultural relevance. Giriama is benchmarked for the first time. Our evaluation shows that OpenAI's o1 model outperforms the others across all languages, scoring 92.8% in English, 88.8% in Latvian, and 70.8% in Giriama on 0-shot tasks. Mistral-large (35.6%) and Llama-70B IT (41%) perform weakly on both Latvian and Giriama. Our results underscore the need for localized benchmarks and human evaluations in advancing the cultural contextualization of AI.