This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training. We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators. We construct a new optimization scheme called relaxed approximate proximal point (RAPP), which is the first explicit method without anchoring to achieve last-iterate convergence rates for $\rho$-comonotone problems while only requiring $\rho > -\tfrac{1}{2L}$. The construction extends to constrained and regularized settings. By replacing the inner optimizer in RAPP we rediscover the family of Lookahead algorithms, for which we establish convergence in cohypomonotone problems even when the base optimizer is taken to be gradient descent ascent. The range of cohypomonotone problems in which Lookahead converges is further expanded by exploiting the fact that Lookahead inherits the properties of the base optimizer. We corroborate the results with experiments on generative adversarial networks, which demonstrate the benefits of the linear interpolation present in both RAPP and Lookahead.
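Informally, both RAPP and Lookahead update an outer iterate by linearly interpolating between the current point and the output of an inner procedure. As a minimal sketch (the symbols $\bar z_t$, $T$, $k$, and $\lambda$ below are illustrative and not taken from the paper's formal statements), one outer step can be written as
$$
\bar z_{t+1} \;=\; (1-\lambda)\,\bar z_t \;+\; \lambda\, T^{k}(\bar z_t), \qquad \lambda \in (0,1),
$$
where $T^{k}(\bar z_t)$ denotes $k$ steps of the inner optimizer started from $\bar z_t$: an approximation of the proximal (resolvent) step in the case of RAPP, or a base optimizer such as gradient descent ascent in the case of Lookahead. The interpolation weight $\lambda < 1$ provides the averaging, or relaxation, that the nonexpansive-operator analysis exploits.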