Many real-world design problems, such as hardware design or drug discovery, involve optimizing an expensive black-box function $f(x)$. Bayesian Optimization has emerged as a sample-efficient framework for this problem. However, the basic setting considered by these methods is simplified compared to real-world experimental setups, where experiments often generate a wealth of useful information. We introduce a new setting in which an experiment yields high-dimensional auxiliary information $h(x)$ along with the performance measure $f(x)$; moreover, a history of previously solved tasks from the same task family is available for accelerating optimization. A key challenge of this setting is learning how to represent and utilize $h(x)$ to efficiently solve new optimization tasks beyond the task history. We develop a novel approach based on a neural model that predicts $f(x)$ for unseen designs given a few-shot context containing observations of $h(x)$. We evaluate our method on two challenging domains, robotic hardware design and neural network hyperparameter tuning, and introduce a novel design problem and large-scale benchmark for the former. On both domains, our method utilizes auxiliary feedback effectively to achieve more accurate few-shot prediction and faster optimization of design tasks, significantly outperforming several methods for multi-task optimization.