We present a novel benchmark designed to rigorously evaluate the capabilities of large language models (LLMs) in mathematical reasoning and algorithmic code synthesis tasks. The benchmark comprises integer sequence generation tasks sourced from the Online Encyclopedia of Integer Sequences (OEIS), testing LLMs' abilities to accurately and efficiently generate Python code to compute these sequences without using lookup tables. Our comprehensive evaluation includes leading models from OpenAI (including the specialized reasoning-focused o-series), Anthropic, Meta, and Google across a carefully selected set of 1000 OEIS sequences categorized as ``easy'' or ``hard.'' Half of these sequences are classical sequences from the early days of OEIS and half were recently added to avoid contamination with the models' training data. To prevent models from exploiting memorized sequence values, we introduce an automated cheating detection mechanism that flags usage of lookup tables, validated by comparison with human expert evaluations. Experimental results demonstrate that reasoning-specialized models (o3, o3-mini, o4-mini from OpenAI, and Gemini 2.5-pro from Google) achieve substantial improvements in accuracy over non-reasoning models, especially on more complex tasks. However, overall model performance on the hard sequences is poor, highlighting persistent challenges in algorithmic reasoning. Our benchmark provides important insights into the strengths and limitations of state-of-the-art LLMs, particularly emphasizing the necessity for further advancements to reliably solve complex mathematical reasoning tasks algorithmically.
翻译:我们提出了一种新颖的基准测试,旨在严格评估大语言模型在数学推理和算法代码合成任务中的能力。该基准测试包含来自在线整数序列百科全书(OEIS)的整数序列生成任务,测试大语言模型在不使用查找表的情况下,准确高效地生成计算这些序列的Python代码的能力。我们的综合评估涵盖了来自OpenAI(包括专注于推理的o系列专门模型)、Anthropic、Meta和Google的领先模型,测试集为精心挑选的1000个被归类为“简单”或“困难”的OEIS序列。其中一半序列来自OEIS早期的经典序列,另一半是近期新增的序列,以避免与模型的训练数据产生污染。为防止模型利用记忆的序列值,我们引入了一种自动作弊检测机制,用于标记查找表的使用,并通过与人类专家评估进行比较来验证。实验结果表明,专门用于推理的模型(来自OpenAI的o3、o3-mini、o4-mini以及来自Google的Gemini 2.5-pro)在准确性上相较于非推理模型取得了显著提升,尤其是在更复杂的任务上。然而,模型在困难序列上的整体表现较差,突显了算法推理方面持续存在的挑战。我们的基准测试为最先进大语言模型的优势和局限性提供了重要见解,特别强调了需要进一步的技术进步才能可靠地通过算法解决复杂的数学推理任务。