Transfer learning for Bayesian optimisation has generally assumed a strong similarity between optimisation tasks, with at least a subset having similar optimal inputs. This assumption can reduce computational costs, but it is violated in a wide range of optimisation problems where transfer learning may nonetheless be useful. We replace this assumption with a weaker one, requiring only that the shape of the optimisation landscape be similar, and analyse the recent method Prior Learning for Bayesian Optimisation (PLeBO) in this setting. By learning priors for the hyperparameters of the Gaussian process surrogate model, we can better approximate the underlying function, especially when few function evaluations are available. We validate the learned priors and compare against a breadth of transfer learning approaches, using synthetic data and a recent air pollution optimisation problem as benchmarks. We show that PLeBO and prior transfer find good inputs in fewer evaluations.
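The prior-learning idea in the abstract can be illustrated with a minimal sketch. This is not the paper's PLeBO implementation; it assumes a toy family of source tasks that share landscape shape (a sinusoid) but have different optimal inputs (shifts), fits GP lengthscales on each by maximum likelihood, summarises them as a log-normal prior, and uses MAP estimation (likelihood plus learned prior) on a target task with very few evaluations. All function names and the grid-search procedure are illustrative choices, not part of the original method.

```python
# Hedged sketch of hyperparameter-prior transfer for a GP surrogate
# (illustrative only, not the PLeBO algorithm from the paper).
import numpy as np

def rbf_kernel(x1, x2, lengthscale, variance=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    d = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * d / lengthscale**2)

def log_marginal_likelihood(x, y, lengthscale, noise=1e-3):
    """GP log marginal likelihood via a Cholesky factorisation."""
    K = rbf_kernel(x, x, lengthscale) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(x) * np.log(2 * np.pi))

def ml_lengthscale(x, y, grid):
    """Maximum-likelihood lengthscale from a candidate grid."""
    return grid[np.argmax([log_marginal_likelihood(x, y, l) for l in grid])]

rng = np.random.default_rng(0)
grid = np.geomspace(0.05, 5.0, 60)

# Source tasks: same landscape shape, different optimal inputs (shifts).
source_ls = []
for shift in [0.0, 1.0, 2.5]:
    x = rng.uniform(0, 10, 40)
    y = np.sin(x - shift) + 0.05 * rng.standard_normal(40)
    source_ls.append(ml_lengthscale(x, y, grid))

# Learned log-normal prior over the lengthscale (floor sigma for stability).
mu = np.mean(np.log(source_ls))
sigma = max(np.std(np.log(source_ls)), 0.1)

# Target task with only 5 evaluations: MAP = log likelihood + log prior.
xt = rng.uniform(0, 10, 5)
yt = np.sin(xt - 4.0) + 0.05 * rng.standard_normal(5)
log_prior = -0.5 * ((np.log(grid) - mu) / sigma) ** 2
log_post = np.array([log_marginal_likelihood(xt, yt, l) for l in grid]) + log_prior
map_ls = grid[np.argmax(log_post)]
```

With so few target evaluations, the likelihood alone is nearly flat over lengthscales; the learned prior pulls the MAP estimate toward values that suited the source tasks, which is the mechanism the abstract credits for better surrogate fits at low evaluation budgets.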