Bayesian optimization is a popular framework for efficiently tackling black-box search problems. As a rule, these algorithms operate by iteratively choosing what to evaluate next until some predefined budget has been exhausted. We investigate replacing this de facto stopping rule with criteria based on the probability that a point satisfies a given set of conditions. We focus on the prototypical example of an $(\epsilon, \delta)$-criterion: stop when a solution has been found whose value is within $\epsilon > 0$ of the optimum with probability at least $1 - \delta$ under the model. For Gaussian process priors, we show that Bayesian optimization satisfies this criterion under mild technical assumptions. Further, we give a practical algorithm for evaluating Monte Carlo stopping rules in a manner that is both sample efficient and robust to estimation error. These findings are accompanied by empirical results which demonstrate the strengths and weaknesses of the proposed approach.
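The $(\epsilon, \delta)$-criterion above can be checked with a simple Monte Carlo estimate: draw joint samples of the objective from the Gaussian process posterior over a finite set of candidate points and measure how often the incumbent solution is within $\epsilon$ of the sampled optimum. The sketch below illustrates this idea under a candidate-discretization assumption; the function name and signature are hypothetical, not the paper's implementation, and in particular it lacks the sample-efficiency and robustness safeguards the paper develops.

```python
import numpy as np

def epsilon_delta_stop(mean, cov, best_value, eps=0.1, delta=0.05,
                       n_samples=10_000, rng=None):
    """Naive Monte Carlo check of the (eps, delta) stopping rule.

    mean, cov: GP posterior mean vector and covariance matrix at a
        finite set of candidate points (a discretization assumption).
    best_value: objective value of the incumbent solution found so far.
    Returns True if, under the model, the incumbent is within eps of
    the optimum with estimated probability at least 1 - delta.
    """
    rng = np.random.default_rng(rng)
    # Draw joint posterior samples of the objective at the candidates.
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    # On each sample path, the "optimum" is the max over candidates.
    path_max = samples.max(axis=1)
    # Fraction of paths on which the incumbent is eps-optimal.
    prob = np.mean(path_max - best_value <= eps)
    return bool(prob >= 1 - delta)
```

A plain Monte Carlo estimate like this is noisy near the $1 - \delta$ threshold, which is exactly the estimation-error issue the paper's practical algorithm is designed to handle.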