Typical training of neural networks by large-stepsize gradient descent (GD) under the logistic loss involves two distinct phases: the empirical risk oscillates in the first phase and decreases monotonically in the second. We investigate this phenomenon in two-layer networks that satisfy a near-homogeneity condition. We show that the second phase begins once the empirical risk falls below a certain threshold that depends on the stepsize. Additionally, we show that the normalized margin grows nearly monotonically in the second phase, demonstrating an implicit bias of GD in training non-homogeneous predictors. If the dataset is linearly separable and the derivative of the activation function is bounded away from zero, we show that the average empirical risk decreases, implying that the first phase must stop within a finite number of steps. Finally, we demonstrate that, with a suitably large stepsize, GD that undergoes this phase transition is more efficient than GD that monotonically decreases the risk. Our analysis applies to networks of any width, beyond the well-known neural tangent kernel and mean-field regimes.
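To make the setup concrete, the following is a minimal NumPy sketch (not the paper's experiment or code): full-batch GD with a deliberately large stepsize on a two-layer network under the logistic loss, on linearly separable data and with a leaky-ReLU activation whose derivative is bounded away from zero, logging the empirical risk so that the two phases (oscillation, then monotone decrease) can be looked for. All names and parameter values below are illustrative assumptions; in particular, the stepsize eta may need tuning to exhibit the oscillatory phase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable toy data: y_i = sign(u^T x_i) for a ground-truth direction u.
n, d = 64, 5
u = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ u)

# Two-layer network f(x) = sum_j a_j * sigma(w_j^T x), with a leaky-ReLU
# activation (derivative bounded away from zero by the negative-side slope alpha).
# Only the inner weights W are trained in this sketch; the outer weights a are fixed.
m, alpha = 16, 0.2
W = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def forward(W, X):
    z = X @ W.T                                  # (n, m) pre-activations
    return np.where(z > 0, z, alpha * z) @ a     # (n,) network outputs

def risk(W):
    # Empirical logistic risk: (1/n) * sum_i log(1 + exp(-y_i f(x_i))).
    margins = y * forward(W, X)
    return np.mean(np.logaddexp(0.0, -margins))

def grad(W):
    z = X @ W.T
    act = np.where(z > 0, z, alpha * z)
    dact = np.where(z > 0, 1.0, alpha)
    margins = y * (act @ a)
    # d/df log(1 + exp(-y f)) = -y / (1 + exp(y f))
    g_out = -y / (1.0 + np.exp(margins))                  # (n,)
    # Chain rule back to W: (1/n) * sum_i g_out_i * a_j * sigma'(w_j^T x_i) * x_i.
    return (g_out[:, None] * dact * a[None, :]).T @ X / n  # (m, d)

eta = 10.0   # deliberately large stepsize (illustrative value)
for t in range(500):
    W -= eta * grad(W)
    if t % 50 == 0:
        print(t, risk(W))
```

Plotting the logged risk over iterations is one simple way to inspect whether it first oscillates and then, past a stepsize-dependent threshold, decreases monotonically.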