Bayesian Optimization is a popular approach for optimizing expensive black-box functions. Its key idea is to use a surrogate model to approximate the objective and, importantly, to quantify the associated uncertainty, which allows a sequential search for query points that balances exploration and exploitation. The Gaussian process (GP) has been a primary candidate for the surrogate model, thanks to its Bayesian-principled uncertainty quantification and modeling flexibility. However, its limitations have also spurred an array of alternatives whose convergence properties can be more opaque. Motivated by this, we study in this paper an axiomatic framework that elicits the minimal requirements guaranteeing black-box optimization convergence, requirements that can apply beyond GP-based methods. Moreover, we leverage the design freedom in our framework, which we call Pseudo-Bayesian Optimization, to construct empirically superior algorithms. In particular, we show how using simple local regression, together with a suitable "randomized prior" construction to quantify uncertainty, not only guarantees convergence but also consistently outperforms state-of-the-art benchmarks in examples ranging from high-dimensional synthetic experiments to realistic hyperparameter tuning and robotic applications.
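To make the recipe concrete, the following is a minimal sketch of such a loop under assumed design choices: a k-nearest-neighbor local-regression surrogate, a "randomized prior" realized as a random tanh feature added to each surrogate draw, and a Thompson-style acquisition over random candidates. All names (`local_mean`, `propose`, `pseudo_bo`) and settings are illustrative placeholders, not the paper's exact construction.

```python
# Sketch of a pseudo-Bayesian optimization loop (illustrative assumptions:
# kNN local regression + randomized-prior uncertainty + Thompson sampling).
import numpy as np

def local_mean(X, y, x, k=5):
    """kNN local regression: average the k nearest observed values."""
    d = np.linalg.norm(X - x, axis=1)
    idx = np.argsort(d)[:min(k, len(y))]
    return y[idx].mean()

def propose(X, y, bounds, rng, n_cand=256, k=5, scale=1.0):
    """Pick the next query by minimizing one randomized surrogate draw."""
    lo, hi = bounds
    cand = rng.uniform(lo, hi, size=(n_cand, X.shape[1]))
    # One randomized-prior draw, shared across all candidates: the local fit
    # is made on residuals against the prior, so different surrogate draws
    # disagree (i.e., uncertainty grows) away from the observed data.
    w = rng.normal(size=X.shape[1])
    b = scale * rng.normal()
    prior = lambda Z: scale * np.tanh(Z @ w) + b
    resid = y - prior(X)
    vals = np.array([local_mean(X, resid, c, k) for c in cand]) + prior(cand)
    return cand[np.argmin(vals)]  # minimization convention

def pseudo_bo(f, bounds, n_init=5, n_iter=30, seed=0):
    """Sequential loop: fit surrogate, propose, evaluate, repeat."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        x_next = propose(X, y, bounds, rng)
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

# Usage: minimize a toy quadratic on [-2, 2]^2.
best_x, best_y = pseudo_bo(lambda x: float(np.sum(x**2)),
                           (np.array([-2.0, -2.0]), np.array([2.0, 2.0])))
```

The design point the sketch illustrates is that fitting the local model to residuals against a fresh random prior on each draw makes the draws agree near observed data and diverge elsewhere; the acquisition step then exploits this disagreement for exploration, without requiring a GP posterior.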