The concept of the Critical Batch Size, pioneered by OpenAI, has long served as a foundational principle for large-scale pre-training. However, with the paradigm shift towards the Warmup-Stable-Decay (WSD) learning rate scheduler, we observe that the original theoretical framework and its underlying mechanisms no longer align with the new pre-training dynamics. To bridge this gap between theory and practice, this paper derives a revised E(S) relationship tailored to the WSD scheduler, characterizing the trade-off between training data consumption E and training steps S during pre-training. Our theoretical analysis reveals two fundamental properties of WSD-based pre-training: 1) B_min, the minimum batch size threshold required to achieve a target loss, and 2) B_opt, the optimal batch size that maximizes data efficiency by minimizing total tokens. Building upon these properties, we propose a dynamic Batch Size Scheduler. Extensive experiments demonstrate that our revised formula accurately captures the dynamics of large-scale pre-training, and that the resulting scheduling strategy significantly enhances both training efficiency and final model quality.
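For context, the baseline trade-off from the original critical-batch-size analysis (McCandlish et al., 2018) can be sketched as below; this is the standard form that the WSD-specific revision departs from, not the revised E(S) relationship derived in this paper. Reaching a fixed target loss with batch size B requires
\[
  S = S_{\min}\!\left(1 + \frac{B_{\mathrm{crit}}}{B}\right) \text{ steps and }
  E = E_{\min}\!\left(1 + \frac{B}{B_{\mathrm{crit}}}\right) \text{ examples,}
\]
equivalently
\[
  \left(\frac{S}{S_{\min}} - 1\right)\!\left(\frac{E}{E_{\min}} - 1\right) = 1,
  \qquad
  B_{\mathrm{crit}} = \frac{E_{\min}}{S_{\min}},
\]
where S_min and E_min are the minimum steps and minimum training examples needed to reach the target loss. Under this baseline, E only approaches E_min as B tends to 0 (S tends to infinity), so it yields neither a nonzero batch-size floor nor a finite token-minimizing batch size; it is the revised WSD-specific relationship that gives B_min and B_opt their meaning.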