While momentum-based optimization algorithms are commonly used in the notoriously non-convex optimization problems of deep learning, their analysis has historically been restricted to the convex and strongly convex settings. In this article, we partially close this gap between theory and practice and demonstrate that virtually identical guarantees can be obtained for optimization problems with a `benign' non-convexity. We show that these weaker geometric assumptions are well justified in overparametrized deep learning, at least locally. Variations of this result are obtained for a continuous-time model of Nesterov's accelerated gradient descent algorithm (NAG), for the classical discrete-time version of NAG, and for versions of NAG with stochastic gradient estimates, whether the noise is purely additive or exhibits both additive and multiplicative scaling.
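For orientation, the objects named above admit standard formulations; the following is a minimal sketch in the usual parametrization (step size $s$, smooth objective $f$), and the exact constants used in the analysis may differ. The classical discrete-time NAG iteration reads
\[
x_{k+1} = y_k - s\,\nabla f(y_k), \qquad y_{k+1} = x_{k+1} + \frac{k}{k+3}\,\bigl(x_{k+1} - x_k\bigr),
\]
and its continuous-time model, in the limit $s \to 0$ identified by Su, Boyd, and Cand\`es, is the ODE
\[
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\bigl(X(t)\bigr) = 0.
\]
The stochastic variants replace $\nabla f$ with a noisy estimate whose error is purely additive, or additionally scales with the gradient itself, as described above.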