LLM-based agents for machine learning engineering (MLE) predominantly rely on tree search, a form of gradient-free optimization that uses scalar validation scores to rank candidate solutions. As LLM reasoning capabilities improve, such exhaustive enumeration becomes increasingly inefficient relative to directed updates, much as accurate gradients make gradient descent far more efficient than random search. We introduce \textsc{Gome}, an MLE agent that operationalizes this analogy as gradient-based optimization: \textsc{Gome} maps structured diagnostic reasoning to gradient computation, success memory to momentum, and multi-trace execution to distributed optimization. Under a closed-world protocol that isolates architectural effects from external knowledge, \textsc{Gome} achieves a state-of-the-art 35.1\% any-medal rate on MLE-Bench under a restricted 12-hour budget on a single V100 GPU. Scaling experiments across 10 models reveal a critical crossover: with weaker models, tree search retains an advantage because exhaustive exploration compensates for unreliable reasoning; as reasoning capability strengthens, gradient-based optimization progressively pulls ahead, and the gap widens at the frontier tier. Given the rapid advancement of reasoning-oriented LLMs, this trend positions gradient-based optimization as an increasingly favorable paradigm. We release our codebase and GPT-5 traces.
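To make the optimizer analogy concrete, the sketch below instantiates the three mappings (diagnosis as gradient, success memory as momentum, multi-trace execution as distributed workers) on a one-dimensional toy objective so that it runs end to end. It is a minimal illustration under our own assumptions, not the \textsc{Gome} implementation; all names (\texttt{Candidate}, \texttt{Trace}, \texttt{diagnose}) and numeric choices are hypothetical.

\begin{verbatim}
# Runnable toy sketch of the analogy: diagnostic feedback plays the
# role of a gradient, success memory plays the role of momentum, and
# parallel traces mirror distributed optimization. All names here are
# illustrative assumptions, not the actual Gome interface.
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    params: float       # stands in for the ML pipeline being edited
    score: float = 0.0  # scalar validation score

def evaluate(c: Candidate) -> float:
    # Toy objective (higher is better), peaked at params == 3.0.
    return -(c.params - 3.0) ** 2

def diagnose(c: Candidate) -> float:
    # "Gradient": directed feedback on *how* to improve, here the
    # literal ascent direction; in Gome this role is played by the
    # LLM's structured diagnostic reasoning, not a scalar rank.
    return -2.0 * (c.params - 3.0)

@dataclass
class Trace:
    cand: Candidate
    momentum: float = 0.0  # "success memory": past good directions

    def step(self, lr: float = 0.1, beta: float = 0.9) -> None:
        g = diagnose(self.cand)
        self.momentum = beta * self.momentum + g  # accumulate successes
        self.cand.params += lr * self.momentum    # directed update
        self.cand.score = evaluate(self.cand)

# "Distributed optimization": independent traces, keep the best.
traces = [Trace(Candidate(random.uniform(-5.0, 5.0))) for _ in range(4)]
for _ in range(200):
    for t in traces:
        t.step()
best = max(traces, key=lambda t: t.cand.score)
print(f"best params={best.cand.params:.3f}, score={best.cand.score:.5f}")
\end{verbatim}

In the actual agent, the ``gradient'' would be natural-language diagnosis and the ``parameters'' would be code edits; the sketch uses scalars only so that the gradient-descent-with-momentum structure of the analogy is runnable and inspectable.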