The central challenge of AI for Science is not reasoning alone, but the ability to create computational methods in an open-ended scientific world. Existing LLM-based agents rely on static, predefined tool libraries, a paradigm that fundamentally fails in scientific domains where tools are sparse, heterogeneous, and intrinsically incomplete. In this paper, we propose Test-Time Tool Evolution (TTE), a new paradigm that enables agents to synthesize, verify, and evolve executable tools during inference. By transforming tools from fixed resources into problem-driven artifacts, TTE overcomes the rigidity and long-tail limitations of static tool libraries. To facilitate rigorous evaluation, we introduce SciEvo, a benchmark comprising 1,590 scientific reasoning tasks supported by 925 automatically evolved tools. Extensive experiments show that TTE achieves state-of-the-art performance in both accuracy and tool efficiency, while enabling effective cross-domain adaptation of computational tools. The code and benchmark have been released at https://github.com/lujiaxuan0520/Test-Time-Tool-Evol.