We discover restrained numerical instabilities in current training practices of deep networks with stochastic gradient descent (SGD) and its variants. We show that numerical error (on the order of the smallest floating point bit, and thus the most extreme or limiting numerical perturbation induced by floating point arithmetic) in training deep nets can be amplified significantly and result in significant test accuracy variance (sensitivity), comparable to the test accuracy variance due to stochasticity in SGD. We show that this can likely be traced to instabilities of the optimization dynamics that are restrained, i.e., localized over iterations and over regions of the weight tensor space. We do so by presenting a theoretical framework based on the numerical analysis of partial differential equations (PDEs), and analyzing the gradient descent PDE of convolutional neural networks (CNNs). We show that this PDE is stable only under certain conditions on the learning rate and weight decay. We show that when these conditions are violated, the instability, rather than blowing up, can be restrained. We show that this is a consequence of the non-linear PDE associated with gradient descent of the CNN, whose local linearization changes when the step size of the discretization is over-driven, resulting in a stabilizing effect. We link restrained instabilities to the recently discovered Edge of Stability (EoS) phenomenon, in which the stable step size predicted by classical theory is exceeded while the loss continues to be optimized and training still converges. Because restrained instabilities occur at the EoS, our theory provides new insights and predictions about the EoS, in particular the role of regularization and the dependence on network complexity.
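For context (a textbook linearization we add here, not the paper's PDE analysis itself), the "certain conditions on the learning rate and weight decay" referenced above generalize the classical linear-stability bound for gradient descent, which can be stated on a quadratic model with generic symbols: step size $\eta$, weight decay $\mu$, and Hessian $H$.

```latex
% Classical linear stability of gradient descent with weight decay on a
% quadratic loss L(w) = (1/2) w^T H w; a standard computation included
% for context only, not the paper's result.
\begin{align*}
  w_{t+1} &= w_t - \eta\,(H w_t + \mu\, w_t)
           = \bigl((1-\eta\mu)\, I - \eta H\bigr)\, w_t, \\
  \text{stable} &\iff |1 - \eta(\lambda_i + \mu)| < 1 \;\;\forall i
  \iff \eta\,\bigl(\lambda_{\max}(H) + \mu\bigr) < 2 .
\end{align*}
% At the Edge of Stability, the sharpness \lambda_{\max}(\nabla^2 L) is
% observed to rise to roughly 2/\eta and hover there while the loss
% continues to decrease.
```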
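As a rough illustration of the floating-point-sensitivity claim, the following sketch (our own toy setup, not the paper's experiments; the data, architecture, and learning rate are all assumptions) trains two identical float32 networks whose initializations differ by a single ulp in one weight entry, and tracks how far the two trajectories drift apart.

```python
# Minimal sketch: amplify a one-ulp (unit in the last place, the smallest
# representable change) perturbation through SGD in float32.
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic regression task, everything in float32.
X = rng.standard_normal((256, 8)).astype(np.float32)
y = np.sin(X.sum(axis=1, keepdims=True)).astype(np.float32)

W1 = (0.3 * rng.standard_normal((8, 32))).astype(np.float32)
W2 = (0.3 * rng.standard_normal((32, 1))).astype(np.float32)
params_a = [W1, W2]
params_b = [W1.copy(), W2.copy()]
# Perturb a single weight entry of run b by one ulp.
params_b[0][0, 0] = np.nextafter(params_b[0][0, 0], np.float32(np.inf))

def sgd_step(params, lr=np.float32(0.5)):
    """One full-batch gradient step on L = (1/2N) * sum (pred - y)^2."""
    W1, W2 = params
    h = np.tanh(X @ W1)
    pred = h @ W2
    err = (pred - y) / np.float32(len(X))   # dL/dpred
    gW2 = h.T @ err
    gh = (err @ W2.T) * (1.0 - h * h)       # backprop through tanh
    gW1 = X.T @ gh
    return [W1 - lr * gW1, W2 - lr * gW2]

# The learning rate is deliberately large, so the run sits near the classical
# stability bound, where the paper locates restrained instabilities; there the
# initial one-ulp gap (~1e-7 relative) can grow by many orders of magnitude
# rather than dying out as both runs approach the same minimum.
for t in range(2001):
    params_a = sgd_step(params_a)
    params_b = sgd_step(params_b)
    if t % 500 == 0:
        gap = max(np.abs(a - b).max() for a, b in zip(params_a, params_b))
        print(f"step {t:4d}  max |w_a - w_b| = {gap:.3e}")
```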