In many real-world optimization problems, we have prior information about what objective function values are achievable. In this paper, we study the scenario in which we have either exact knowledge of the minimum value or a possibly inexact lower bound on it. We propose bound-aware Bayesian optimization (BABO), a Bayesian optimization method that exploits such prior information through a new surrogate model and acquisition function. Specifically, we present SlogGP, a new surrogate model that incorporates the bound information, and we adapt the Expected Improvement (EI) acquisition function accordingly. Empirical results on a variety of benchmarks demonstrate the benefit of taking prior information about the optimal value into account and show that the proposed approach significantly outperforms existing techniques. Furthermore, we observe that even in the absence of prior information on the bound, the proposed SlogGP surrogate model still performs better than the standard GP model in most cases, which we attribute to its greater expressiveness.