Can AI make progress on important, unsolved mathematical problems? Large language models are now capable of sophisticated mathematical and scientific reasoning, but whether they can conduct novel research remains widely debated and underexplored. We introduce HorizonMath, a benchmark of over 100 predominantly unsolved problems spanning eight domains of computational and applied mathematics, paired with an open-source evaluation framework for automated verification. The benchmark targets a class of problems where discovery is hard, requiring genuine mathematical insight, but verification is simple and computationally efficient. Because the solutions are unknown, HorizonMath is immune to data contamination, and most state-of-the-art models score near 0%. By contrast, existing research-level benchmarks rely on formal proof verification or manual review, both of which are expensive to scale. Using this platform, we identify two problems for which GPT 5.4 Pro proposes solutions that improve on the best published results, representing potential novel contributions (pending expert review). We release HorizonMath as an open challenge and a growing community resource, where correct solutions to problems in the unsolved classes could constitute new results in the mathematical literature.