The advancement of artificial intelligence toward agentic science is currently bottlenecked by the challenge of ultra-long-horizon autonomy: the ability to sustain strategic coherence and iterative correction over experimental cycles spanning days or weeks. While Large Language Models (LLMs) have demonstrated prowess in short-horizon reasoning, they are easily overwhelmed by execution details in the high-dimensional, delayed-feedback environments of real-world research, failing to consolidate sparse feedback into coherent long-term guidance. Here, we present ML-Master 2.0, an autonomous agent that masters ultra-long-horizon machine learning engineering (MLE), a representative microcosm of scientific discovery. By reframing context management as a process of cognitive accumulation, our approach introduces Hierarchical Cognitive Caching (HCC), a multi-tiered architecture inspired by computer memory hierarchies that enables the structural differentiation of experience over time. By dynamically distilling transient execution traces into stable knowledge and cross-task wisdom, HCC allows agents to decouple immediate execution from long-term experimental strategy, effectively overcoming the scaling limits of static context windows. In evaluations on OpenAI's MLE-Bench under 24-hour budgets, ML-Master 2.0 achieves a state-of-the-art medal rate of 56.44%. Our findings demonstrate that ultra-long-horizon autonomy provides a scalable blueprint for AI capable of autonomous exploration beyond human-precedented complexity.
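The tiered distillation described above can be sketched in highly simplified form. This is not the paper's implementation: the class name, tier names, capacity threshold, and the placeholder summarization strings are all assumptions introduced for illustration; in a real system the distillation and consolidation steps would be LLM-driven summarization calls rather than string formatting.

```python
from dataclasses import dataclass, field


@dataclass
class HierarchicalCognitiveCache:
    """Hypothetical three-tier sketch: transient execution traces are
    distilled into task-level knowledge, then consolidated into
    cross-task wisdom, so the working context stays bounded."""
    traces: list = field(default_factory=list)     # tier 1: raw execution traces (volatile)
    knowledge: list = field(default_factory=list)  # tier 2: distilled task-level insights
    wisdom: list = field(default_factory=list)     # tier 3: stable cross-task principles
    trace_capacity: int = 4                        # assumed overflow threshold

    def record(self, trace: str) -> None:
        """Append a raw trace; distill when the transient tier overflows."""
        self.traces.append(trace)
        if len(self.traces) > self.trace_capacity:
            self._distill()

    def _distill(self) -> None:
        # Stand-in for an LLM summarization call: compress the transient
        # traces into one knowledge entry and clear the transient tier.
        self.knowledge.append(f"summary({len(self.traces)} traces)")
        self.traces.clear()

    def consolidate(self) -> None:
        """At task end, promote accumulated knowledge into cross-task wisdom."""
        if self.knowledge:
            self.wisdom.append(f"principle({len(self.knowledge)} insights)")
            self.knowledge.clear()

    def context(self) -> list:
        """Working context passed to the agent: stable tiers first,
        then the most recent raw traces."""
        return self.wisdom + self.knowledge + self.traces
```

Under this sketch, raw traces never accumulate past the capacity threshold: they are compressed upward instead, which is the sense in which immediate execution is decoupled from long-term strategy.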