The composition of pretraining data is a key determinant of foundation model performance, yet there is no established guideline for allocating a limited computational budget across data sources. Most current approaches either rely on extensive experiments with smaller models or on dynamic data adjustment that likewise requires proxy models; both add significant workflow complexity and computational overhead. In this paper, we introduce Adaptive Data Optimization (ADO), an algorithm that optimizes the data distribution online, concurrently with model training. Unlike existing techniques, ADO requires no external knowledge, proxy models, or modifications to the model update. Instead, ADO fits per-domain scaling laws to estimate each domain's learning potential during training and adjusts the data mixture accordingly, making it more scalable and easier to integrate. Experiments demonstrate that ADO achieves performance comparable to or better than prior methods while remaining computationally efficient across compute scales, offering a practical way to adjust the data distribution dynamically without sacrificing flexibility or increasing cost. Beyond its practical benefits, ADO also provides a new perspective on data collection strategies via scaling laws.
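To make the mechanism concrete, the sketch below illustrates one way the core idea could be realized: fit a per-domain power law L(n) = c·n^(−α) + b to each domain's loss history, take the magnitude of its derivative |dL/dn| as that domain's current learning potential, and sample domains in proportion to it. All function names and fitting details here are illustrative assumptions, not the paper's exact procedure; the full ADO algorithm runs online during training and has components not shown here.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, c, alpha, b):
    # Per-domain scaling law: loss as a power law in (normalized) tokens seen.
    return c * np.power(x, -alpha) + b

def learning_potential(tokens_seen, losses):
    # Fit the scaling law to one domain's loss history and return |dL/dn|
    # at the latest step: how much loss reduction the next token is
    # expected to buy in this domain.
    tokens_seen = np.asarray(tokens_seen, dtype=float)
    losses = np.asarray(losses, dtype=float)
    n0 = tokens_seen[0]
    x = tokens_seen / n0  # normalize token counts so the fit is well-conditioned
    p0 = (max(losses[0] - losses[-1], 1e-3), 0.3, losses[-1])
    (c, alpha, b), _ = curve_fit(power_law, x, losses, p0=p0, maxfev=10_000)
    # d/dn [c * (n/n0)^(-alpha) + b], evaluated at the most recent n.
    return abs(c * alpha * x[-1] ** (-alpha - 1.0) / n0)

def mixture_weights(histories):
    # Sampling distribution proportional to each domain's learning potential.
    pots = np.array([learning_potential(t, l) for t, l in histories])
    return pots / pots.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = np.linspace(1e6, 1e8, 50)
    # Two synthetic domains with different scaling exponents.
    hist = [
        (n, 3.0 * n ** -0.30 + 1.8 + 0.005 * rng.standard_normal(50)),
        (n, 2.0 * n ** -0.10 + 2.2 + 0.005 * rng.standard_normal(50)),
    ]
    print(mixture_weights(hist))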
```

On this toy data, the weights shift toward the second domain, whose fitted loss curve is still falling faster per token at the current scale, which is the qualitative behavior the abstract describes: domains with more remaining learning potential receive a larger share of the data mixture.