We introduce MLE-bench, a benchmark for measuring how well AI agents perform at machine learning engineering. To this end, we curate 75 ML engineering-related competitions from Kaggle, creating a diverse set of challenging tasks that test real-world ML engineering skills such as training models, preparing datasets, and running experiments. We establish human baselines for each competition using Kaggle's publicly available leaderboards. We use open-source agent scaffolds to evaluate several frontier language models on our benchmark, finding that the best-performing setup, OpenAI's o1-preview with AIDE scaffolding, achieves at least the level of a Kaggle bronze medal in 16.9% of competitions. In addition to our main results, we investigate various forms of resource scaling for AI agents and the impact of contamination from pre-training. We open-source our benchmark code (github.com/openai/mle-bench/) to facilitate future research in understanding the ML engineering capabilities of AI agents.