Bayesian optimization (BO) for high-dimensional constrained problems remains a significant challenge due to the curse of dimensionality. We propose Local Constrained Bayesian Optimization (LCBO), a novel framework tailored to such settings. Unlike trust-region methods, which are prone to premature shrinkage under tight or complex constraints, LCBO exploits the differentiable landscape of constraint-penalized surrogates to alternate between rapid local descent and uncertainty-driven exploration. Theoretically, we prove that LCBO achieves a convergence rate for the Karush-Kuhn-Tucker (KKT) residual that depends polynomially on the dimension $d$ for common kernels under mild assumptions, offering a rigorous alternative to global BO, whose regret bounds typically scale exponentially with $d$. Extensive evaluations on high-dimensional benchmarks (up to 100D) demonstrate that LCBO consistently outperforms state-of-the-art baselines.
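The alternation the abstract describes, rapid local descent on a constraint-penalized surrogate interleaved with uncertainty-driven exploration, can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the GP surrogates are replaced by analytic stand-ins (a quadratic objective and a linear constraint), the penalty weight `RHO` is an assumed constant, and the distance-based `explore` step is only a crude proxy for posterior-variance maximization.

```python
import numpy as np

# Illustrative stand-ins: in the actual framework these would be GP
# posterior means refit from new evaluations each round.

def surrogate_mean(x):
    """Stand-in for the GP posterior mean of the objective f."""
    return float(np.sum((x - 1.0) ** 2))

def surrogate_grad(x):
    return 2.0 * (x - 1.0)

def constraint_mean(x):
    """Stand-in for the GP posterior mean of the constraint c(x) <= 0."""
    return float(np.sum(x) - 1.0)

def constraint_grad(x):
    return np.ones_like(x)

RHO = 10.0  # assumed penalty weight, chosen above the KKT multiplier

def penalized_value(x):
    """Constraint-penalized surrogate m(x) + RHO * max(0, c(x))."""
    return surrogate_mean(x) + RHO * max(0.0, constraint_mean(x))

def penalized_descent(x, lr=0.01, steps=50):
    """Rapid local descent phase on the penalized surrogate."""
    for _ in range(steps):
        g = surrogate_grad(x)
        if constraint_mean(x) > 0.0:  # penalty term is active
            g = g + RHO * constraint_grad(x)
        x = x - lr * g
    return x

def explore(x, sigma=0.5, n_cand=64):
    """Crude uncertainty proxy: among Gaussian perturbations of the
    incumbent, jump to the one farthest away."""
    rng = np.random.default_rng(0)
    cand = x + sigma * rng.standard_normal((n_cand, x.size))
    return cand[np.argmax(np.linalg.norm(cand - x, axis=1))]

d = 5
x = np.zeros(d)
best_x, best_val = x, np.inf
for t in range(12):
    x = penalized_descent(x)      # local descent phase
    val = penalized_value(x)
    if val < best_val:
        best_x, best_val = x.copy(), val
    x = explore(best_x)           # uncertainty-driven phase
```

Under these stand-ins the incumbent settles near the boundary of the feasible region $\{x : \sum_i x_i \le 1\}$; in the actual method the surrogates would be refit from new function and constraint evaluations after every round.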