Large language models have demonstrated remarkable capabilities across a wide range of tasks, largely owing to their training on diversely sourced data. However, the impact of pretraining data composition on model performance remains poorly understood. This paper introduces $\textbf{BiMix}$, a novel bivariate data mixing law that models the joint scaling behavior of domain proportions and data volume in LLM pretraining. $\textbf{BiMix}$ provides a systematic framework for understanding and optimizing data mixtures across diverse domains. Through extensive experiments on two large-scale datasets, we demonstrate $\textbf{BiMix}$'s high accuracy in loss extrapolation (mean relative error $< 0.2\%$) and its generalization to unseen mixtures ($R^2 > 0.97$). Optimizing domain proportions under $\textbf{BiMix}$ yields model performance superior to existing methods. Furthermore, we establish entropy-based measures as efficient proxies for data mixing, offering a computationally lightweight strategy. Our work contributes both theoretical insights into data mixing dynamics and practical tools for enhancing LLM training efficiency, paving the way for more effective scaling strategies in language model development.