Understanding the geometry of the loss landscape near a minimum is key to explaining the implicit bias of gradient-based methods in non-convex optimization problems such as deep neural network training and deep matrix factorization. A central quantity characterizing this geometry is the maximum eigenvalue of the Hessian of the loss, which measures the sharpness of the landscape. To date, its precise role has remained unclear because no exact expression for this sharpness measure was known in general settings. In this paper, we present the first exact expression for the maximum eigenvalue of the Hessian of the squared-error loss at any minimizer in general overparameterized deep matrix factorization (i.e., deep linear neural network training) problems, resolving an open question posed by Mulayoff & Michaeli (2020). To complement our theory, we empirically investigate an escape phenomenon observed during gradient-based training near a minimum, whose analysis crucially relies on our exact expression for the sharpness.
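As a numerical illustration of the quantity discussed above (not the paper's exact expression), the following minimal Python sketch estimates the sharpness, i.e. the largest eigenvalue of the Hessian of the squared-error loss, for a small two-layer matrix factorization W2 @ W1 ≈ M. The matrix sizes, the random target M, and the finite-difference step are illustrative assumptions.

```python
# Minimal sketch: estimate the sharpness (largest Hessian eigenvalue) of the
# squared-error loss for a two-layer factorization W2 @ W1 ≈ M.
# The Hessian is built with central finite differences; sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 3
M = rng.standard_normal((d, d))                      # target matrix to factorize
W1, W2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
theta0 = np.concatenate([W1.ravel(), W2.ravel()])    # stack parameters into one vector

def loss(theta):
    """Squared-error loss 0.5 * ||W2 @ W1 - M||_F^2."""
    W1 = theta[:d * d].reshape(d, d)
    W2 = theta[d * d:].reshape(d, d)
    return 0.5 * np.linalg.norm(W2 @ W1 - M, "fro") ** 2

def approx_grad(f, theta, eps=1e-4):
    """Central-difference gradient of f at theta."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e_i = np.zeros_like(theta); e_i[i] = eps
        g[i] = (f(theta + e_i) - f(theta - e_i)) / (2 * eps)
    return g

def hessian(f, theta, eps=1e-4):
    """Central-difference Hessian of f at theta (symmetrized)."""
    n = theta.size
    H = np.zeros((n, n))
    for i in range(n):
        e_i = np.zeros(n); e_i[i] = eps
        H[:, i] = (approx_grad(f, theta + e_i) - approx_grad(f, theta - e_i)) / (2 * eps)
    return 0.5 * (H + H.T)

sharpness = np.linalg.eigvalsh(hessian(loss, theta0)).max()
print("estimated sharpness (largest Hessian eigenvalue):", sharpness)
```

This brute-force estimate is only practical for tiny problems; the point of the paper's closed-form expression is precisely to characterize this eigenvalue at minimizers without such numerical computation.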