We study grokking, the onset of generalization long after overfitting, in a classical ridge regression setting. We prove end-to-end grokking results for learning over-parameterized linear regression models using gradient descent with weight decay. Specifically, we prove that the following stages occur: (i) the model overfits the training data early in training; (ii) poor generalization persists long after overfitting has set in; and (iii) the generalization error eventually becomes arbitrarily small. Moreover, we show, both theoretically and empirically, that grokking can be amplified or eliminated in a principled manner through proper hyperparameter tuning. To the best of our knowledge, these are the first rigorous quantitative bounds on the generalization delay (which we refer to as the "grokking time") in terms of the training hyperparameters. Lastly, going beyond the linear setting, we empirically demonstrate that our quantitative bounds also capture grokking behavior in non-linear neural networks. Our results suggest that grokking is not an inherent failure mode of deep learning but rather a consequence of specific training conditions, and that it can therefore be avoided without fundamental changes to the model architecture or learning algorithm.
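The mechanism behind these stages can be seen in a minimal NumPy sketch (our own illustration under assumed sizes and hyperparameters, not the paper's construction): gradient descent fits the training data almost immediately, while weight decay only slowly removes the large null-space component inherited from initialization, and it is this component that keeps the test error high in the interim.

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-parameterized linear regression: many more features (d) than samples (n).
# All sizes and hyperparameters below are illustrative, not taken from the paper.
n, d = 20, 200
X = rng.normal(size=(n, d))
w_star = X.T @ rng.normal(size=n)          # ground truth in the row space of X,
w_star /= np.linalg.norm(w_star)           # so the ridge solution generalizes well
y = X @ w_star                             # noiseless labels for a clean picture
X_te = rng.normal(size=(2000, d))
y_te = X_te @ w_star

# Large random init: its component in the null space of X leaves the training
# loss untouched but dominates the test loss until weight decay removes it.
w = 5.0 * rng.normal(size=d)

lr, wd = 0.05, 1e-3                        # smaller lr * wd -> longer delay
for step in range(1, 200_001):
    grad = X.T @ (X @ w - y) / n           # gradient of the mean squared error
    w -= lr * (grad + wd * w)              # gradient step with weight decay
    if step in {100, 1_000, 10_000, 50_000, 200_000}:
        tr = np.mean((X @ w - y) ** 2)
        te = np.mean((X_te @ w - y_te) ** 2)
        print(f"step {step:>7}  train MSE {tr:.2e}  test MSE {te:.2e}")
```

In this sketch the training loss collapses within the first few hundred steps, while the test loss lingers near its initialization-scale plateau and only decays on the 1/(lr · wd) time scale at which weight decay shrinks the null-space component, mirroring stages (i)-(iii); shrinking the initialization scale or increasing the weight decay shortens or removes the delay, consistent with the hyperparameter-tuning claim above.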