Recent findings by Cohen et al. (2021) demonstrate that when neural networks are trained with full-batch gradient descent at step size $\eta$, the sharpness--defined as the largest eigenvalue of the full-batch Hessian--consistently stabilizes at $2/\eta$. These results have significant implications for convergence and generalization. However, this behavior was not observed for mini-batch stochastic gradient descent (SGD), limiting the broader applicability of these findings. We show that SGD trains in a different regime we call the Edge of Stochastic Stability. In this regime, what hovers at $2/\eta$ is not the sharpness itself but the average over batches of the largest eigenvalue of the Hessian of the mini-batch loss (MiniBS)--which is always larger than the sharpness. This implies that the sharpness is generally lower when training with smaller batches or a larger learning rate, providing a basis for the observed implicit regularization effect of SGD toward flatter minima and for a number of well-established empirical phenomena. Additionally, we quantify the gap between the MiniBS and the sharpness, further characterizing this distinct training regime.
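To make the two quantities concrete, below is a minimal PyTorch sketch (not the authors' code) of how they could be measured: the sharpness as the top eigenvalue of the full-batch Hessian, and the MiniBS as the average over mini-batches of the top eigenvalue of each mini-batch Hessian. The names `model`, `loss_fn`, and `loader` are hypothetical placeholders, and power iteration on Hessian-vector products is one standard estimator of $\lambda_{\max}$, not necessarily the estimator used in the paper.

```python
import torch

def top_eigenvalue(loss, params, iters=20):
    # Estimate the largest Hessian eigenvalue of `loss` w.r.t. `params`
    # by power iteration on Hessian-vector products (double backward).
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    v = torch.randn_like(flat_grad)
    v /= v.norm()
    lam = 0.0
    for _ in range(iters):
        hv = torch.autograd.grad(flat_grad @ v, params, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv]).detach()
        lam = float(v @ hv)  # Rayleigh quotient estimate of lambda_max
        v = hv / hv.norm()
    return lam

def batch_sharpness(model, loss_fn, loader, params, iters=20):
    # MiniBS: average over mini-batches of the top eigenvalue of the
    # Hessian of each mini-batch loss.
    vals = []
    for xb, yb in loader:
        vals.append(top_eigenvalue(loss_fn(model(xb), yb), params, iters))
    return sum(vals) / len(vals)
```

Under this sketch, the sharpness is obtained by calling `top_eigenvalue` on the full-batch loss; the Edge of Stochastic Stability then corresponds to `batch_sharpness` hovering near $2/\eta$ during training while the full-batch sharpness stays below it.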