On a variety of tasks, the performance of neural networks predictably improves with training time, dataset size and model size across many orders of magnitude. This phenomenon is known as a neural scaling law. Of fundamental importance is the compute-optimal scaling law, which reports the performance as a function of units of compute when model sizes are chosen optimally. We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization. This model reproduces many observations about neural scaling laws. First, our model predicts why the scaling of performance with training time and with model size have different power law exponents. Consequently, the theory predicts an asymmetric compute-optimal scaling rule in which the number of training steps is increased faster than the number of model parameters, consistent with recent empirical observations. Second, it has been observed that early in training, networks converge to their infinite-width dynamics at a rate $1/\textit{width}$ but at late times exhibit a rate $\textit{width}^{-c}$, where $c$ depends on the structure of the architecture and task. We show that our model exhibits this behavior. Lastly, our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
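To make the asymmetric compute-optimal rule concrete, the following is a minimal illustrative derivation under an assumed separable power-law ansatz for the loss; the exponents $\alpha,\beta$ and additive form are assumptions for illustration, not the exact functional form derived in this work. Suppose the test loss decays as
\[
L(N, T) \approx a\,N^{-\alpha} + b\,T^{-\beta}, \qquad C \propto N\,T,
\]
where $N$ is the number of model parameters, $T$ the number of training steps, and $C$ the total compute. Substituting $T = C/N$ and setting $\partial L/\partial N = 0$ gives the balance condition $\alpha\, a\, N^{-\alpha} = \beta\, b\, T^{-\beta}$, so the compute-optimal allocation scales as
\[
N^{*} \propto C^{\beta/(\alpha+\beta)}, \qquad T^{*} \propto C^{\alpha/(\alpha+\beta)}.
\]
Whenever the training-time exponent is smaller than the model-size exponent ($\beta < \alpha$), the optimal number of training steps $T^{*}$ grows faster with compute than the optimal parameter count $N^{*}$, which is the asymmetric scaling behavior described above.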