The advancement of artificial intelligence toward agentic science is currently bottlenecked by the challenge of ultra-long-horizon autonomy: the ability to sustain strategic coherence and iterative correction over experimental cycles spanning days or weeks. While Large Language Models (LLMs) have demonstrated prowess in short-horizon reasoning, they are easily overwhelmed by execution details in the high-dimensional, delayed-feedback environments of real-world research, failing to consolidate sparse feedback into coherent long-term guidance. Here, we present ML-Master 2.0, an autonomous agent that masters ultra-long-horizon machine learning engineering (MLE), a representative microcosm of scientific discovery. By reframing context management as a process of cognitive accumulation, our approach introduces Hierarchical Cognitive Caching (HCC), a multi-tiered architecture inspired by computer systems that enables the structural differentiation of experience over time. By dynamically distilling transient execution traces into stable knowledge and cross-task wisdom, HCC allows agents to decouple immediate execution from long-term experimental strategy, effectively overcoming the scaling limits of static context windows. In evaluations on OpenAI's MLE-Bench under 24-hour budgets, ML-Master 2.0 achieves a state-of-the-art medal rate of 56.44%. Our findings demonstrate that ultra-long-horizon autonomy provides a scalable blueprint for AI capable of autonomous exploration beyond the complexity of human precedent.
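The tiered distillation described above can be sketched minimally as follows. This is an illustrative assumption of how a multi-tier cognitive cache might be organized, not the paper's implementation: the class name `HierarchicalCognitiveCache`, the tier names, and the `distill`/`consolidate` methods are all hypothetical, and the summarization functions stand in for LLM-based compression.

```python
from collections import deque


class HierarchicalCognitiveCache:
    """Illustrative three-tier store: transient traces are distilled into
    stable per-task knowledge, which is consolidated into cross-task wisdom."""

    def __init__(self, trace_capacity=8):
        self.traces = deque(maxlen=trace_capacity)  # tier 1: transient execution traces
        self.knowledge = []                         # tier 2: stable per-task knowledge
        self.wisdom = []                            # tier 3: cross-task wisdom

    def record(self, trace):
        """Store a raw execution trace; the oldest trace is evicted when full."""
        self.traces.append(trace)

    def distill(self, summarize):
        """Compress the accumulated traces into one stable knowledge entry."""
        if self.traces:
            self.knowledge.append(summarize(list(self.traces)))
            self.traces.clear()

    def consolidate(self, generalize):
        """Promote accumulated per-task knowledge into a cross-task wisdom entry."""
        if self.knowledge:
            self.wisdom.append(generalize(self.knowledge))
            self.knowledge = []

    def context(self):
        """Assemble a prompt context: stable tiers first, recent traces last."""
        return self.wisdom + self.knowledge + list(self.traces)
```

Under this sketch, the agent keeps only a bounded window of raw traces in context, while the distilled tiers carry long-horizon strategy across experimental cycles.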