Constrained Bayesian optimization (CBO) methods have seen significant success in black-box optimization with constraints. One of the most commonly used CBO methods is the constrained expected improvement (CEI) algorithm. CEI is a natural extension of expected improvement (EI) to the setting where constraints are incorporated. However, the theoretical convergence rate of CEI has not been established. In this work, we study the convergence rate of CEI by analyzing an upper bound on its simple regret. First, we show that when the objective function $f$ and the constraint function $c$ are each assumed to lie in a reproducing kernel Hilbert space (RKHS), CEI achieves convergence rates of $\mathcal{O}\left(t^{-\frac{1}{2}}\log^{\frac{d+1}{2}}(t)\right)$ and $\mathcal{O}\left(t^{-\frac{\nu}{2\nu+d}} \log^{\frac{\nu}{2\nu+d}}(t)\right)$ for the commonly used squared exponential and Matérn kernels ($\nu > \frac{1}{2}$), respectively. Second, we show that when $f$ is assumed to be sampled from a Gaussian process (GP), CEI achieves similar convergence rates with high probability. Numerical experiments are performed to validate the theoretical analysis.
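For reference, a standard form of the CEI acquisition function is sketched below; the notation is illustrative rather than taken from this abstract. Assuming independent GP posteriors on $f$ and $c$, a single constraint $c(x) \le 0$, minimization of $f$, and a best feasible observed value $f^{+}$, CEI weights the usual EI by the posterior probability of feasibility:
\[
\alpha_{\mathrm{CEI}}(x) \;=\; \mathrm{EI}(x)\,\Pr\left[c(x) \le 0\right]
\;=\; \mathbb{E}\left[\max\left(f^{+} - f(x),\, 0\right)\right] \cdot \Phi\!\left(\frac{-\mu_c(x)}{\sigma_c(x)}\right),
\]
where $\mu_c(x)$ and $\sigma_c(x)$ are the posterior mean and standard deviation of the constraint GP and $\Phi$ is the standard normal CDF. The point queried at iteration $t$ is then $x_t \in \arg\max_x \alpha_{\mathrm{CEI}}(x)$.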