Existing benchmarks for frontier models often test specialized, "PhD-level" knowledge that is difficult for non-experts to grasp. In contrast, we present a benchmark with 594 problems based on the NPR Sunday Puzzle Challenge that requires only general knowledge. Our benchmark is challenging for both humans and models; however, correct solutions are easy to verify, and models' mistakes are easy to spot. As LLMs are more widely deployed in society, we believe it is useful to develop benchmarks for frontier models that humans can understand without the need for deep domain expertise. Our work reveals capability gaps that are not evident in existing benchmarks: OpenAI o1 significantly outperforms other reasoning models on our benchmark, despite being on par with other models on benchmarks that test specialized knowledge. Furthermore, our analysis of reasoning outputs uncovers new kinds of failures. DeepSeek R1, for instance, often concedes with "I give up" before providing an answer that it knows is wrong. R1 can also be remarkably "uncertain" in its output, and in rare cases it does not "finish thinking," which suggests the need for techniques to "wrap up" before the context window limit is reached. We also quantify the effectiveness of reasoning longer to identify the point beyond which more reasoning is unlikely to improve accuracy on our benchmark.