The advancement of artificial intelligence toward agentic science is currently bottlenecked by the challenge of ultra-long-horizon autonomy: the ability to sustain strategic coherence and iterative correction over experimental cycles spanning days or weeks. While Large Language Models (LLMs) have demonstrated prowess in short-horizon reasoning, they are easily overwhelmed by execution details in the high-dimensional, delayed-feedback environments of real-world research, failing to consolidate sparse feedback into coherent long-term guidance. Here, we present ML-Master 2.0, an autonomous agent that masters ultra-long-horizon machine learning engineering (MLE), a representative microcosm of scientific discovery. Reframing context management as a process of cognitive accumulation, our approach introduces Hierarchical Cognitive Caching (HCC), a multi-tiered architecture inspired by computer systems that enables the structural differentiation of experience over time. By dynamically distilling transient execution traces into stable knowledge and cross-task wisdom, HCC allows agents to decouple immediate execution from long-term experimental strategy, effectively overcoming the scaling limits of static context windows. In evaluations on OpenAI's MLE-Bench under 24-hour budgets, ML-Master 2.0 achieves a state-of-the-art medal rate of 56.44%. Our findings demonstrate that ultra-long-horizon autonomy provides a scalable blueprint for AI capable of autonomous exploration beyond the complexity of human precedent.
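To make the tiered idea concrete, the following is a minimal, purely illustrative sketch of a hierarchical cognitive cache: raw execution traces occupy a bounded transient tier, get distilled into compact per-task knowledge on eviction, and recurring knowledge is promoted to a cross-task wisdom tier that is surfaced first when building a bounded context. All class and method names here (`HierarchicalCognitiveCache`, `record`, `consolidate`, `context`) are assumptions for exposition, not the paper's actual implementation.

```python
from collections import deque


class HierarchicalCognitiveCache:
    """Toy three-tier store: transient traces -> stable knowledge -> cross-task wisdom.

    Illustrative only; the real HCC distillation would use an LLM, not string joins.
    """

    def __init__(self, trace_capacity=4):
        self.traces = deque(maxlen=trace_capacity)  # tier 1: raw execution traces
        self.knowledge = []                         # tier 2: distilled per-task lessons
        self.wisdom = []                            # tier 3: cross-task heuristics

    def record(self, trace):
        # Before the transient tier overflows, distill its contents into a
        # compact lesson rather than silently dropping the oldest trace.
        if len(self.traces) == self.traces.maxlen:
            self.knowledge.append("lesson:" + ",".join(self.traces))
            self.traces.clear()
        self.traces.append(trace)

    def consolidate(self):
        # Promote lessons that recur across tasks into stable wisdom.
        seen = set()
        for lesson in self.knowledge:
            if lesson in seen and lesson not in self.wisdom:
                self.wisdom.append(lesson)
            seen.add(lesson)

    def context(self, budget=3):
        # Build a bounded prompt context: wisdom first, then knowledge,
        # then the most transient traces, truncated to the budget.
        return (self.wisdom + self.knowledge + list(self.traces))[:budget]
```

The point of the sketch is the decoupling the abstract describes: the agent's working context stays bounded regardless of how long the run grows, because old traces are compressed upward rather than accumulated verbatim.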