Finding approximate stationary points, i.e., points where the gradient is approximately zero, of non-convex but smooth objective functions $f$ over unrestricted $d$-dimensional domains is one of the most fundamental problems in classical non-convex optimization. Nevertheless, the computational and query complexity of this problem are still not well understood when the dimension $d$ of the problem is independent of the approximation error. In this paper, we show the following computational and query complexity results:

1. The problem of finding approximate stationary points over unrestricted domains is PLS-complete.
2. For $d = 2$, we provide a zero-order algorithm for finding $\varepsilon$-approximate stationary points that requires at most $O(1/\varepsilon)$ value queries to the objective function.
3. We show that any algorithm needs at least $\Omega(1/\varepsilon)$ queries to the objective function and/or its gradient to find $\varepsilon$-approximate stationary points when $d = 2$. Combined with the above, this characterizes the query complexity of this problem to be $\Theta(1/\varepsilon)$.
4. For $d = 2$, we provide a zero-order algorithm for finding $\varepsilon$-KKT points in constrained optimization problems that requires at most $O(1/\sqrt{\varepsilon})$ value queries to the objective function. This closes the gap between the works of Bubeck and Mikulincer [2020] and Vavasis [1993] and characterizes the query complexity of this problem to be $\Theta(1/\sqrt{\varepsilon})$.
5. Combining our results with the recent result of Fearnley et al. [2022], we show that finding approximate KKT points in constrained optimization is reducible to finding approximate stationary points in unconstrained optimization, but the converse is impossible.
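To make the central notion concrete: a point $x$ is an $\varepsilon$-approximate stationary point of $f$ if $\|\nabla f(x)\| \le \varepsilon$, and a zero-order algorithm may only query values of $f$, not its gradient. The sketch below is a minimal illustration of these definitions, not the algorithm from this paper: it estimates the gradient by central finite differences (value queries only) and runs plain gradient descent until the estimated gradient norm drops below $\varepsilon$. The test function and all parameter choices (step size, difference width) are illustrative assumptions.

```python
import numpy as np

def grad_fd(f, x, h=1e-6):
    """Zero-order gradient estimate via central finite differences.
    Uses only value queries to f, as a zero-order algorithm must."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def find_stationary(f, x0, eps=1e-3, step=1e-2, max_iters=100_000):
    """Plain gradient descent (illustrative only): stop once the
    estimated gradient norm is at most eps, i.e., at an
    eps-approximate stationary point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        g = grad_fd(f, x)
        if np.linalg.norm(g) <= eps:
            return x
        x = x - step * g
    return x

# A smooth, non-convex test function in d = 2 (an assumed example).
f = lambda x: np.sin(x[0]) * np.cos(x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)
x = find_stationary(f, [1.0, 1.0])
```

Note that this naive scheme offers no worst-case query-complexity guarantee; the paper's point is precisely that a carefully designed zero-order method achieves the optimal $\Theta(1/\varepsilon)$ query bound for $d = 2$.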