While large language models are trained on massive datasets, this data is heavily skewed toward English. Does their impressive performance reflect genuine ability or just this data advantage? To find out, we tested them in a setting where they could not rely on data abundance: low-resource languages. Building on prior work by Agarwal et al. (2025) that used Next Sentence Prediction (NSP) as a test, we created a large-scale benchmark with 10,000 questions each for English (a high-resource language), Swahili (medium-resource), and Hausa (low-resource). We then tested several top models, including GPT-4 Turbo, Gemini 1.5 Flash, and LLaMA 3 70B, to see how their performance holds up across these settings. The results painted a clear picture of how language resource levels shape outcomes. While all models excelled in English, their accuracy dropped in Swahili and fell sharply in Hausa, with LLaMA 3 struggling the most. The story became even more interesting when we introduced Chain-of-Thought (CoT) prompting. For the struggling LLaMA 3, CoT acted as a helpful guide, significantly boosting its accuracy. However, for the more capable GPT-4 and Gemini, the same technique often backfired, leading to a kind of "overthinking" that hurt their results in the cross-lingual context. This reveals that Chain-of-Thought is not a universal solution; its effectiveness depends heavily on the model's baseline capability and the specific context of the task. Our framework pinpoints LLM weaknesses, highlights when CoT helps or hinders cross-lingual NSP performance, and identifies the factors influencing model decisions.
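The NSP evaluation and CoT toggle described above can be sketched as follows. This is a minimal illustration, not the authors' actual harness: the prompt wording, the `model_fn` stand-in, and the answer-parsing rule are all assumptions made for demonstration.

```python
def build_nsp_prompt(context: str, candidate: str, use_cot: bool = False) -> str:
    """Format a binary Next Sentence Prediction query (illustrative template)."""
    prompt = (
        f"Context sentence: {context}\n"
        f"Candidate sentence: {candidate}\n"
        "Does the candidate sentence naturally follow the context? "
        "Answer 'yes' or 'no'."
    )
    if use_cot:
        # Chain-of-Thought variant: ask the model to reason before answering.
        prompt += " Think step by step, then give your final answer."
    return prompt

def evaluate_nsp(model_fn, examples, use_cot: bool = False) -> float:
    """Score a model on (context, candidate, label) triples; returns accuracy.

    model_fn: callable mapping a prompt string to the model's text response.
    """
    correct = 0
    for context, candidate, label in examples:
        answer = model_fn(build_nsp_prompt(context, candidate, use_cot))
        # Read the final token so any CoT reasoning text is skipped (assumed rule).
        prediction = "yes" if answer.strip().lower().endswith("yes") else "no"
        correct += prediction == label
    return correct / len(examples)

# Toy stand-in model that always answers "yes" (assumption for illustration).
always_yes = lambda prompt: "yes"

examples = [
    ("The sun rose over the hills.", "Birds began to sing.", "yes"),
    ("The sun rose over the hills.", "The stock market closed early.", "no"),
]
print(evaluate_nsp(always_yes, examples))  # → 0.5
```

In a real run, `model_fn` would wrap an API call to each model under test, and `examples` would be drawn from the 10,000-question sets per language.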