In this paper, we present LingOly, a novel benchmark for advanced reasoning abilities in large language models. Using challenging Linguistic Olympiad puzzles, we evaluate (i) capabilities for in-context identification and generalisation of linguistic patterns in very low-resource or extinct languages, and (ii) abilities to follow complex task instructions. The LingOly benchmark covers more than 90 mostly low-resource languages, minimising issues of data contamination, and contains 1,133 problems across 6 formats and 5 levels of human difficulty. We assess performance with both direct accuracy and comparison to a no-context baseline to penalise memorisation. Scores from 11 state-of-the-art LLMs demonstrate that the benchmark is challenging, and models perform poorly on the higher-difficulty problems. On harder problems, even the top model achieved only 35.3% accuracy, a 21.7% improvement over the no-context baseline. Large closed models typically outperform open models, and in general, the higher-resource the language, the better the scores. These results indicate that, in the absence of memorisation, true multi-step out-of-domain reasoning remains a challenge for current language models.