We focus on constrained, $L$-smooth, potentially stochastic and nonconvex-nonconcave min-max problems either satisfying $\rho$-cohypomonotonicity or admitting a solution to the $\rho$-weak Minty Variational Inequality (MVI), where larger values of the parameter $\rho>0$ correspond to a greater degree of nonconvexity. These problem classes include examples in two-player reinforcement learning, interaction-dominant min-max problems, and certain synthetic test problems on which classical min-max algorithms fail. It has been conjectured that first-order methods can tolerate a value of $\rho$ no larger than $\frac{1}{L}$, but existing results in the literature have stagnated at the tighter requirement $\rho < \frac{1}{2L}$. With a simple argument, we obtain optimal or best-known complexity guarantees under cohypomonotonicity or weak MVI conditions for $\rho < \frac{1}{L}$. Our first main insight for improving the convergence analyses is to harness the recently proposed $\textit{conic nonexpansiveness}$ property of operators. Second, we provide a refined analysis of inexact Halpern iteration that relaxes the required inexactness level, improving some state-of-the-art complexity results even for constrained stochastic convex-concave min-max problems. Third, we analyze a stochastic inexact Krasnosel'ski\u{\i}-Mann iteration with a multilevel Monte Carlo estimator when the assumptions hold only with respect to a solution.
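To make the anchoring scheme concrete, the following is a minimal illustrative sketch of the classical (exact, deterministic) Halpern iteration $x_{k+1} = \beta_k x_0 + (1-\beta_k) T(x_k)$ with the standard weights $\beta_k = \frac{1}{k+2}$, applied to a simple nonexpansive map. The rotation operator and all parameter choices here are assumptions made for illustration only; this is not the inexact or stochastic variant analyzed in the paper.

```python
import numpy as np

def halpern(T, x0, iters=2000):
    """Classical Halpern iteration: x_{k+1} = beta_k * x0 + (1 - beta_k) * T(x_k),
    with anchoring weights beta_k = 1/(k+2). Converges to the fixed point of a
    nonexpansive T nearest to the anchor x0 (in Hilbert space)."""
    x = x0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)
        x = beta * x0 + (1.0 - beta) * T(x)
    return x

# Illustrative nonexpansive operator: a planar rotation (an isometry),
# whose unique fixed point is the origin. Plain fixed-point iteration
# x_{k+1} = T(x_k) would orbit forever; the anchoring term drives
# the fixed-point residual ||x_k - T(x_k)|| down at an O(1/k) rate.
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda z: R @ z

x_star = halpern(T, np.array([1.0, 1.0]))
```

After 2000 iterations both $\|x_k\|$ and the residual $\|x_k - T(x_k)\|$ are on the order of $10^{-3}$, consistent with the $O(1/k)$ residual rate known for Halpern iteration with these weights.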