Bayesian optimization (BO) is a popular method for optimizing costly black-box functions. While traditional BO optimizes each new target task from scratch, meta-learning has emerged as a way to leverage knowledge from related tasks to optimize new tasks faster. However, existing meta-learning BO methods rely on surrogate models that suffer from scalability issues and are sensitive to observations with different scales and noise types across tasks. Moreover, they often overlook the uncertainty associated with task similarity, which leads to unreliable task adaptation when only limited observations are available or when the new task differs significantly from the related tasks. To address these limitations, we propose a novel meta-learning BO approach that bypasses the surrogate model and directly learns the utility of queries across tasks. Our method explicitly models task uncertainty and includes an auxiliary model to enable robust adaptation to new tasks. Extensive experiments show that our method achieves strong anytime performance and outperforms state-of-the-art meta-learning BO methods on various benchmarks.