As large language models (LLMs) advance their mathematical capabilities toward the level of the International Mathematical Olympiad (IMO), the scarcity of challenging, high-quality problems for training and evaluation has become a significant bottleneck. Simultaneously, recent code agents have demonstrated sophisticated coding and reasoning skills, suggesting that code execution can serve as a scalable environment for mathematical experimentation. In this paper, we investigate the potential of code agents to autonomously evolve existing math problems into more complex variants. We introduce a multi-agent framework that performs problem evolution while validating the solvability and increased difficulty of the generated problems. Our experiments demonstrate that, given sufficient test-time exploration, code agents can synthesize new, solvable problems that are structurally distinct from and more challenging than the originals. This work provides empirical evidence that code-driven agents can serve as a viable mechanism for synthesizing high-difficulty mathematical reasoning problems within scalable computational environments. Our data is available at https://github.com/TarferSoul/Code2Math.