High-dimensional problems have long been considered the Achilles' heel of Bayesian optimization algorithms. Spurred by the curse of dimensionality, a large collection of algorithms aims to make Bayesian optimization more performant in this setting, commonly by imposing various simplifying assumptions on the objective. In this paper, we identify the degeneracies that make vanilla Bayesian optimization poorly suited to high-dimensional tasks, and further show how existing algorithms address these degeneracies through the lens of lowering model complexity. Moreover, we propose an enhancement to the prior assumptions typical of vanilla Bayesian optimization, which reduces complexity to manageable levels without imposing structural restrictions on the objective. Our modification, a simple scaling of the Gaussian process lengthscale prior with the dimensionality, reveals that standard Bayesian optimization works drastically better than previously thought in high dimensions, clearly outperforming existing state-of-the-art algorithms on multiple commonly considered real-world high-dimensional tasks.
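The modification described above, scaling the lengthscale prior with the dimensionality, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the log-normal prior form, the sqrt(d) scaling of its location, and the hyperparameters `mu0` and `sigma0` are all assumptions made here for concreteness.

```python
import numpy as np

def lengthscale_prior_samples(dim, n_samples=10_000, mu0=0.0, sigma0=1.0, seed=0):
    """Sample Gaussian process lengthscales from a log-normal prior whose
    location is shifted by log(sqrt(dim)), so that typical lengthscales
    grow with the square root of the input dimensionality.

    mu0, sigma0: hypothetical base hyperparameters of the log-normal prior.
    """
    rng = np.random.default_rng(seed)
    # Shift the prior location with dimensionality: the median lengthscale
    # becomes exp(mu0) * sqrt(dim) instead of the dimension-independent exp(mu0).
    mu = mu0 + 0.5 * np.log(dim)
    return rng.lognormal(mean=mu, sigma=sigma0, size=n_samples)

# Under this sketch, a 100-dimensional problem gets a prior whose median
# lengthscale is 10x that of a 1-dimensional problem, keeping the model's
# effective complexity manageable as dimensionality grows.
samples_low = lengthscale_prior_samples(dim=4)
samples_high = lengthscale_prior_samples(dim=100)
```

Under such a scaling, the prior favors smoother (longer-lengthscale) functions as the dimension grows, which is one way to lower model complexity without restricting the objective's structure.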