This paper presents a novel benchmark where the large language model (LLM) must write code that computes integer sequences from the Online Encyclopedia of Integer Sequences (OEIS), a widely-used resource for mathematical sequences. The benchmark is designed to evaluate both the correctness of the generated code and its computational efficiency. Our benchmark reveals that the o1 series of models outperform other frontier models from OpenAI, Anthropic, Meta, and Google in accuracy and cheating rates across both easy and hard integer sequences. In order to ensure models do not exploit memorized sequence values, we introduce an automated cheating detection mechanism that flags the use of lookup tables and validated this automation against human cheating evaluations. This benchmark provides a meaningful challenge for current LLMs, offering insights into their mathematical reasoning and code writing capabilities, which can guide future research directions and model development in mathematical reasoning and code synthesis.
翻译:本文提出了一种新颖的基准测试方法,要求大语言模型编写代码以生成来自在线整数序列百科全书(OEIS)的整数序列。该基准测试旨在评估生成代码的正确性及其计算效率。我们的基准测试表明,在简单和困难整数序列的准确性和作弊率方面,o1系列模型的表现优于来自OpenAI、Anthropic、Meta和Google的其他前沿模型。为确保模型不利用记忆的序列值,我们引入了一种自动作弊检测机制,该机制可标记查找表的使用,并通过人工作弊评估验证了该自动化方法的有效性。该基准测试为当前大语言模型提供了一个有意义的挑战,有助于深入理解其数学推理和代码编写能力,从而为数学推理和代码合成领域的未来研究方向和模型开发提供指导。