In many real-world optimization problems, we have prior information about what objective function values are achievable. In this paper, we study the scenario in which we have either exact knowledge of the minimum value or a, possibly inexact, lower bound on it. We propose bound-aware Bayesian optimization (BABO), a Bayesian optimization method that uses a new surrogate model and acquisition function to exploit such prior information. We present SlogGP, a new surrogate model that incorporates the bound information, and adapt the Expected Improvement (EI) acquisition function accordingly. Empirical results on a variety of benchmarks demonstrate the benefit of taking prior information about the optimal value into account and show that the proposed approach significantly outperforms existing techniques. Furthermore, we observe that even in the absence of prior information on the bound, the proposed SlogGP surrogate model still performs better than the standard GP model in most cases, which we attribute to its greater expressiveness.