In the current landscape of Large Language Models (LLMs), the curation of large-scale, high-quality training data is a primary driver of model performance. A key lever is the \emph{data recipe}: the data processing pipeline that transforms raw sources into training corpora. Although LLMs are increasingly used to automate individual data processing steps, such as data synthesis and filtering, the overall design of data recipes remains largely manual and labor-intensive, requiring substantial human expertise and iteration. To bridge this gap, we formulate \emph{end-to-end data recipe generation} for LLM adaptation: given a target benchmark and a pool of available data sources, a model must output a complete data recipe that adapts a base LLM to the target task. We present DataChef-32B, which performs online reinforcement learning using a proxy reward that predicts the downstream performance of candidate recipes. Across six held-out tasks, DataChef-32B produces practical recipes whose downstream performance is comparable to that of recipes curated by human experts. Notably, the recipe from DataChef-32B adapts Qwen3-1.7B-Base to the math domain, achieving 66.7 on AIME'25 and surpassing Qwen3-1.7B. This work sheds new light on automating LLM training and developing self-evolving AI systems.