Zeroth-order optimization (ZO) has become a powerful framework for solving black-box problems: it estimates gradients from zeroth-order (function-value) information and updates variables iteratively. The practical applicability of ZO depends critically on two factors: the query cost of a single-step gradient estimate and the overall query complexity. However, existing ZO algorithms cannot be efficient in both simultaneously. In this work, we consider a general constrained optimization model with black-box objective and constraint functions. To solve it, we propose novel algorithms that achieve the state-of-the-art overall query complexity bound of $\mathcal{O}(d/\epsilon^4)$ for finding an $\epsilon$-stationary solution (where $d$ is the dimension of the variable space), while reducing the queries needed for a single-step gradient estimate from $\mathcal{O}(d)$ to $\mathcal{O}(1)$. Specifically, we integrate block updates with gradient descent ascent and with a block gradient estimator, yielding two algorithms: ZOB-GDA and ZOB-SGDA. Instead of constructing full gradients, they estimate partial gradients only along random blocks of coordinates, where the adjustable block size enables high single-step efficiency without sacrificing convergence guarantees. Our theoretical results establish finite-sample convergence of the proposed algorithms for nonconvex optimization. Finally, numerical experiments on a practical problem demonstrate that our algorithms require over ten times fewer queries than existing methods.
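The block gradient estimation idea above can be sketched as follows. This is a minimal illustration, not the paper's exact ZOB-GDA/ZOB-SGDA estimator: it uses a two-point finite-difference estimate along a uniformly random block of coordinates, with an importance-weighting factor $d/b$ so the estimate is unbiased in expectation for smooth functions (all names and defaults here are hypothetical).

```python
import numpy as np

def zo_block_gradient(f, x, block_size, mu=1e-5, rng=None):
    """Two-point zeroth-order estimate of the gradient of f at x,
    computed only along a random block of `block_size` coordinates.

    Each step issues 2 * block_size queries to f, so a constant block
    size gives O(1) queries per step instead of O(d) for a full
    coordinate-wise estimator. The d / block_size factor rescales the
    partial estimate to be unbiased in expectation.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    block = rng.choice(d, size=block_size, replace=False)
    g = np.zeros(d)
    for i in block:
        e = np.zeros(d)
        e[i] = 1.0
        # Central finite difference along coordinate i (2 queries).
        g[i] = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)
    return (d / block_size) * g
```

With `block_size = d` this reduces to the standard full coordinate-wise ZO estimator; shrinking the block trades per-step query cost for estimator variance, which is the trade-off the convergence analysis must control.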