We give a simple local Polyak-Łojasiewicz (PL) criterion that guarantees linear (exponential) convergence of gradient flow and gradient descent to a zero-loss solution of a nonnegative objective. We then verify this criterion for the squared training loss of a feedforward neural network with smooth, strictly increasing activation functions, in a regime that is complementary to the usual over-parameterized analyses: the network width and depth are fixed, while the input data vectors are assumed to be linearly independent (in particular, the ambient input dimension is at least the number of data points). A notable feature of the verification is that it is constructive: it leads to a simple "positive" initialization (zero first-layer weights, strictly positive hidden-layer weights, and sufficiently large output-layer weights) under which gradient descent provably converges to an interpolating global minimizer of the training loss. We also discuss a probabilistic corollary for random initializations, clarify its dependence on the probability of the required initialization event, and provide numerical experiments showing that this theory-guided initialization can substantially accelerate optimization relative to standard random initializations at the same width.
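For orientation, a standard form of such a local PL criterion reads as follows (stated generically here; the PL constant $\mu > 0$, the smoothness constant $\beta$, and the neighborhood $U$ are placeholder notation, not necessarily the paper's). If the nonnegative loss $L$ satisfies
$$\|\nabla L(\theta)\|^2 \;\ge\; 2\mu\, L(\theta) \qquad \text{for all } \theta \in U,$$
then along a gradient-flow trajectory $\dot\theta_t = -\nabla L(\theta_t)$ that remains in $U$,
$$\frac{d}{dt} L(\theta_t) \;=\; -\|\nabla L(\theta_t)\|^2 \;\le\; -2\mu\, L(\theta_t), \qquad \text{hence} \qquad L(\theta_t) \;\le\; e^{-2\mu t}\, L(\theta_0),$$
and gradient descent with step size $\eta \le 1/\beta$ on a $\beta$-smooth loss contracts the loss geometrically, $L(\theta_{k+1}) \le (1 - \eta\mu)\, L(\theta_k)$, as long as the iterates stay in $U$.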
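The "positive" initialization is simple enough to state in code. The sketch below is illustrative only and rests on ad hoc assumptions not fixed by the abstract: a two-hidden-layer network of small fixed width, a tanh activation (smooth and strictly increasing), Gaussian inputs with dimension at least the number of data points, and arbitrarily chosen positive/large weight scales and learning rate. It is not the paper's exact architecture or constants.

```python
# Illustrative sketch (not the paper's exact construction): a small feedforward
# network with a smooth, strictly increasing activation (tanh), linearly
# independent inputs (d >= n), the "positive" initialization described in the
# abstract, and plain gradient descent on the squared loss.  Width, depth,
# weight scales, and the learning rate are placeholder choices.
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 20, 30, 8                  # n data points, input dim d >= n, width m
X = rng.standard_normal((n, d))      # generic Gaussian rows: linearly independent
y = rng.standard_normal(n)

act = np.tanh                        # smooth, strictly increasing activation
dact = lambda z: 1.0 - np.tanh(z) ** 2

# "Positive" initialization: zero first-layer weights, strictly positive
# hidden-layer weights, large output-layer weights (scales are ad hoc).
W1 = np.zeros((d, m))
W2 = rng.uniform(0.2, 0.8, size=(m, m))
w3 = 5.0 * np.ones(m)

eta = 1e-3
for step in range(5001):
    # Forward pass through the two hidden layers.
    Z1 = X @ W1;  A1 = act(Z1)
    Z2 = A1 @ W2; A2 = act(Z2)
    pred = A2 @ w3
    r = pred - y
    loss = 0.5 * np.sum(r ** 2)

    # Backpropagation of the squared loss.
    g_A2 = np.outer(r, w3)
    g_Z2 = g_A2 * dact(Z2)
    g_A1 = g_Z2 @ W2.T
    g_Z1 = g_A1 * dact(Z1)

    # Gradient-descent updates for all three weight layers.
    W1 -= eta * (X.T @ g_Z1)
    W2 -= eta * (A1.T @ g_Z2)
    w3 -= eta * (A2.T @ r)

    if step % 1000 == 0:
        print(f"step {step:5d}   loss {loss:.4e}")
```

With tanh and zero first-layer weights, the network output is initially zero, and the first-layer gradient at initialization scales with the hidden- and output-layer weights; choosing those weights positive and large, as in the recipe above, is what drives the initial progress in this toy setup.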