This monograph covers some recent advances in a range of acceleration techniques frequently used in convex optimization. We first use quadratic optimization problems to introduce two key families of methods, namely momentum and nested optimization schemes; in the quadratic case, the two coincide, yielding the Chebyshev method. We then discuss momentum methods in detail, starting with the seminal work of Nesterov, and structure convergence proofs using a few master templates, such as that for optimized gradient methods, whose key benefit is to show how momentum methods optimize convergence guarantees. We further cover proximal acceleration, at the heart of the Catalyst and Accelerated Hybrid Proximal Extragradient frameworks, using similar algorithmic patterns. Common acceleration techniques rely directly on knowledge of some of the regularity parameters of the problem at hand. We conclude by discussing restart schemes, a set of simple techniques for reaching nearly optimal convergence rates while adapting to unobserved regularity parameters.
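As a concrete illustration of the momentum mechanism mentioned above, the following is a minimal sketch, not taken from the monograph, of Nesterov's accelerated gradient method applied to a quadratic objective f(x) = (1/2) xᵀAx − bᵀx. The random problem instance, the step size 1/L (with L the largest eigenvalue of A, i.e., the gradient's Lipschitz constant), and the iteration budget are all assumptions made for this example.

```python
import numpy as np

def nesterov_agd(A, b, x0, iters=200):
    """Illustrative sketch of Nesterov's accelerated gradient method
    on the quadratic f(x) = 0.5 * x^T A x - b^T x (A assumed PSD)."""
    L = np.linalg.eigvalsh(A)[-1]   # smoothness constant: largest eigenvalue of A
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        grad = A @ y - b                                  # gradient at the extrapolated point
        x_next = y - grad / L                             # gradient step with step size 1/L
        t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2          # standard extrapolation sequence
        y = x_next + ((t - 1) / t_next) * (x_next - x)    # momentum extrapolation
        x, t = x_next, t_next
    return x

# Hypothetical usage on a random positive-definite instance.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 20))
A = M.T @ M                             # PSD (almost surely positive definite) matrix
b = rng.standard_normal(20)
x_star = np.linalg.solve(A, b)          # exact minimizer, for comparison
x_hat = nesterov_agd(A, b, np.zeros(20))
print(np.linalg.norm(x_hat - x_star))   # distance to the optimum after 200 iterations
```

The extrapolation step y = x_{k+1} + ((t_k − 1)/t_{k+1})(x_{k+1} − x_k) is what distinguishes the accelerated scheme from plain gradient descent and is the source of the improved O(1/k²) rate on smooth convex problems.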