This work investigates the effectiveness of schedule-free methods, developed by Defazio et al. (NeurIPS 2024), in nonconvex optimization settings, motivated by their remarkable empirical success in training neural networks. Specifically, we show that schedule-free SGD achieves optimal iteration complexity for nonsmooth, nonconvex optimization problems. Our proof begins with the development of a general framework for online-to-nonconvex conversion, which converts a given online learning algorithm into an optimization algorithm for nonconvex losses. This framework not only recovers existing conversions but also yields two novel conversion schemes. Notably, one of these new conversions corresponds directly to schedule-free SGD, which allows us to establish its optimality. Additionally, our analysis provides valuable insights into the parameter choices for schedule-free SGD, closing a theoretical gap that the convex theory alone cannot explain.
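For concreteness, the schedule-free SGD iteration discussed above maintains three coupled sequences: a base SGD iterate z, a gradient-evaluation point y interpolating between z and an averaged iterate x, and the running average x itself, which is the point returned to the user. The sketch below follows the recursion of Defazio et al. (NeurIPS 2024); the step size, interpolation weight beta, and the uniform averaging weights are illustrative choices, not the tuned values from the paper.

```python
import numpy as np

def schedule_free_sgd(grad, x0, lr=0.1, beta=0.9, steps=100):
    """Minimal sketch of the schedule-free SGD recursion.

    grad : callable returning a (stochastic) gradient at a point
    x0   : initial iterate (NumPy array)
    Returns the averaged iterate x after `steps` updates.
    """
    z = x0.copy()  # base iterate, updated by plain SGD steps
    x = x0.copy()  # running average of the z sequence (returned point)
    for t in range(1, steps + 1):
        y = (1 - beta) * z + beta * x  # gradient is evaluated at y, not z
        z = z - lr * grad(y)           # ordinary SGD step on z
        c = 1.0 / t                    # uniform (Polyak-style) averaging weight
        x = (1 - c) * x + c * z        # online average of z_1, ..., z_t
    return x

# Toy usage on the smooth quadratic f(x) = 0.5 * ||x||^2, whose gradient is x.
x_final = schedule_free_sgd(lambda v: v, np.ones(3))
```

Note that no learning-rate schedule appears anywhere: the only time dependence is the averaging weight c = 1/t, which is precisely the "schedule-free" property the abstract refers to.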