Variational inequalities (VIs) are a broad class of optimization problems encompassing machine learning tasks ranging from standard convex minimization to more complex settings such as min-max optimization and computing equilibria of multi-player games. In convex optimization, strong convexity enables fast statistical learning rates, requiring only $\Theta(1/\epsilon)$ stochastic first-order oracle calls to find an $\epsilon$-optimal solution, rather than the standard $\Theta(1/\epsilon^2)$ calls. In this paper, we explain how one can similarly obtain fast $\Theta(1/\epsilon)$ rates for learning VIs that satisfy strong monotonicity, a generalization of strong convexity. Specifically, we demonstrate that standard stability-based generalization arguments for convex minimization extend directly to VIs when the domain admits a small covering, or when the operator is integrable and suboptimality is measured by potential functions, as when finding equilibria in multi-player games.
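For concreteness, the standard definitions underlying the abstract can be stated as follows (a sketch of the usual textbook formulations, not taken verbatim from the paper): a VI asks for a point where the operator $F$ has no descent direction within the domain, and strong monotonicity strengthens monotonicity in the same way strong convexity strengthens convexity.

```latex
% Variational inequality: given a convex domain $\mathcal{Z}$ and an
% operator $F : \mathcal{Z} \to \mathbb{R}^d$, find $z^\star \in \mathcal{Z}$ with
\[
\langle F(z^\star),\, z - z^\star \rangle \ \ge\ 0
\quad \text{for all } z \in \mathcal{Z}.
\]

% Strong monotonicity with parameter $\mu > 0$ (generalizing
% $\mu$-strong convexity, which is the case $F = \nabla f$):
\[
\langle F(z) - F(z'),\, z - z' \rangle \ \ge\ \mu \, \| z - z' \|^2
\quad \text{for all } z, z' \in \mathcal{Z}.
\]
```

When $F = \nabla f$ for a convex $f$, the first condition is exactly first-order optimality for minimizing $f$ over $\mathcal{Z}$, which is how convex minimization sits inside the VI framework.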