Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90\% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In response, we introduce Humanity's Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. HLE consists of 2,500 questions across dozens of subjects, including mathematics, humanities, and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable, but cannot be quickly answered via internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a significant gap between current LLM capabilities and the expert human frontier on closed-ended academic questions. To inform research and policymaking upon a clear understanding of model capabilities, we publicly release HLE at https://lastexam.ai.
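To make the calibration claim concrete: calibration compares a model's self-reported confidence to its empirical accuracy on graded answers. As a minimal sketch (one standard formulation; the paper's exact variant, e.g. an RMS-weighted version, may differ), the expected calibration error over $N$ answers partitioned into $B$ confidence bins $S_b$ is
\[
\mathrm{ECE} \;=\; \sum_{b=1}^{B} \frac{|S_b|}{N}\,\bigl|\operatorname{acc}(S_b) - \operatorname{conf}(S_b)\bigr|,
\]
where $\operatorname{acc}(S_b)$ is the fraction of correct answers in bin $b$ and $\operatorname{conf}(S_b)$ is the bin's mean stated confidence. A well-calibrated model satisfies $\operatorname{conf}(S_b) \approx \operatorname{acc}(S_b)$ in every bin, so low calibration here means a large gap between stated confidence and actual accuracy.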