Building upon recent work on linesearch-free adaptive proximal gradient methods, this paper proposes adaPG$^{q,r}$, a framework that unifies and extends existing results by providing larger stepsize policies and improved lower bounds. Different choices of the parameters $q$ and $r$ are discussed, and the efficacy of the resulting methods is demonstrated through numerical simulations. To shed further light on the underlying theory, convergence is established in a more general setting that allows for time-varying parameters. Finally, an adaptive alternating minimization algorithm is obtained by exploiting the dual setting. This algorithm not only incorporates additional adaptivity but also extends applicability beyond the standard strongly convex setting.
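To make the idea of a linesearch-free adaptive proximal gradient step concrete, the following is a minimal illustrative sketch. The stepsize rule shown is a simplified Malitsky–Mishchenko-style curvature estimate from successive gradients, not the adaPG$^{q,r}$ policy itself; all function and variable names here are hypothetical choices for the example.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def adaptive_prox_grad(grad_f, prox_g, x0, gamma0=1e-2, iters=200):
    """Linesearch-free adaptive proximal gradient (illustrative sketch).

    The stepsize update below estimates local inverse curvature from
    successive gradients (a Malitsky--Mishchenko-style rule); adaPG^{q,r}
    generalizes such policies via the parameters q and r.
    """
    x_prev, g_prev = x0, grad_f(x0)
    gamma, theta = gamma0, np.inf
    x = prox_g(x_prev - gamma * g_prev, gamma)
    for _ in range(iters):
        g = grad_f(x)
        dx = np.linalg.norm(x - x_prev)
        dg = np.linalg.norm(g - g_prev)
        # Local inverse-Lipschitz estimate; guard against dg == 0.
        L_inv = dx / (2.0 * dg) if dg > 0 else np.inf
        gamma_new = min(gamma * np.sqrt(1.0 + theta), L_inv)
        theta = gamma_new / gamma
        x_prev, g_prev, gamma = x, g, gamma_new
        x = prox_g(x - gamma * g, gamma)
    return x

# Example problem: lasso  min_x 0.5||Ax - b||^2 + lam * ||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, gamma: soft_threshold(v, lam * gamma)
x_star = adaptive_prox_grad(grad_f, prox_g, np.zeros(10))
```

No Lipschitz constant of $\nabla f$ is supplied and no backtracking is performed: the stepsize adapts from observed iterates, which is the behavior the abstract's stepsize policies refine.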