Africa is home to over one-third of the world's languages, yet remains underrepresented in AI research. We introduce Afri-MCQA, the first Multilingual Cultural Question-Answering benchmark, comprising 7.5k Q&A pairs across 15 African languages from 12 countries. The benchmark provides parallel English-African-language Q&A pairs in both text and speech modalities and was created entirely by native speakers. Benchmarking large language models (LLMs) on Afri-MCQA shows that open-weight models perform poorly across the evaluated cultures, with near-zero accuracy on open-ended VQA when queried in a native language or via speech. To disentangle linguistic competence from cultural knowledge, we include control experiments that assess linguistic competence alone, and we observe significant performance gaps between native languages and English in both text and speech. These findings underscore the need for speech-first approaches, culturally grounded pretraining, and cross-lingual cultural transfer. To support more inclusive multimodal AI development for African languages, we release Afri-MCQA under an academic license or CC BY-NC 4.0 on HuggingFace (https://huggingface.co/datasets/Atnafu/Afri-MCQA).
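For readers who want to inspect the released data, the following is a minimal loading sketch using the Hugging Face datasets library; only the repository ID comes from the release link above, while the configuration, split, and field names are left generic rather than assumed.

```python
# Minimal sketch: load Afri-MCQA from the Hugging Face Hub.
# Only the repo ID ("Atnafu/Afri-MCQA") is taken from the paper's release link;
# split and feature names vary by configuration, so consult the dataset card.
from datasets import load_dataset

dataset = load_dataset("Atnafu/Afri-MCQA")

# Inspect the available splits and their features without assuming names.
print(dataset)

# Look at one example from whichever split comes first.
first_split = next(iter(dataset.values()))
print(first_split[0])
```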