Multi-objective Bayesian optimization aims to find the Pareto front of trade-offs between a set of expensive objectives while collecting as few samples as possible. In some cases, it is possible to evaluate the objectives separately, and a different latency or evaluation cost can be associated with each objective. This decoupling of the objectives presents an opportunity to learn the Pareto front faster by avoiding unnecessary, expensive evaluations. We propose a scalarization-based knowledge gradient acquisition function that accounts for the different evaluation costs of the objectives. We prove asymptotic consistency of the estimator of the optimum for an arbitrary, D-dimensional, real compact search space, and show empirically that the algorithm performs comparably with the state of the art and significantly outperforms versions that always evaluate both objectives.
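To make the decoupled, cost-aware selection concrete, below is a minimal sketch, not the paper's implementation: it scores each (candidate, objective) pair by a Monte Carlo estimate of the one-step knowledge gradient of a scalarized posterior mean, divided by that objective's evaluation cost, and queries the pair with the best score. The fixed-hyperparameter GP, the linear scalarization, the discrete candidate grid, and the names `GP` and `next_evaluation` are all illustrative assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

class GP:
    """Minimal 1-D Gaussian process with fixed hyperparameters (assumption)."""
    def __init__(self, noise=1e-4):
        self.noise = noise

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y, float)
        K = rbf(self.X, self.X) + self.noise * np.eye(len(self.X))
        self.Kinv = np.linalg.inv(K)
        return self

    def posterior(self, Xs):
        Ks = rbf(Xs, self.X)
        mu = Ks @ self.Kinv @ self.y
        var = 1.0 - np.einsum("ij,jk,ik->i", Ks, self.Kinv, Ks)
        return mu, np.maximum(var, 1e-12)

def next_evaluation(gps, grid, weights, costs, n_fantasy=128):
    """Pick the (point, objective) pair maximizing the one-step knowledge
    gradient per unit cost of a linearly scalarized posterior mean."""
    mus = np.column_stack([gp.posterior(grid)[0] for gp in gps])
    base = mus @ weights                    # current scalarized posterior mean
    best_now = base.max()
    scores = np.zeros((len(grid), len(gps)))
    for j, gp in enumerate(gps):
        mu_j, var_j = gp.posterior(grid)
        for i in range(len(grid)):
            x = grid[i : i + 1]
            # posterior cross-covariance between the grid and the candidate x
            k_xg = (rbf(grid, x) - rbf(grid, gp.X) @ gp.Kinv @ rbf(gp.X, x)).ravel()
            s2 = var_j[i] + gp.noise        # predictive variance of the fantasy observation
            ys = mu_j[i] + np.sqrt(s2) * rng.standard_normal(n_fantasy)
            # only objective j's mean changes, so the scalarization shifts by w_j * update
            shift = weights[j] * np.outer((ys - mu_j[i]) / s2, k_xg)  # (fantasies, grid)
            kg = (base[None, :] + shift).max(axis=1).mean() - best_now
            scores[i, j] = max(kg, 0.0) / costs[j]
    i, j = np.unravel_index(np.argmax(scores), scores.shape)
    return grid[i], j

# Toy decoupled bi-objective problem: the second objective is five times as
# expensive to evaluate, so it is only queried when its expected gain pays off.
f = [lambda x: np.sin(6 * x), lambda x: np.cos(4 * x)]
grid = np.linspace(0.0, 1.0, 101)
X0 = rng.uniform(size=5)
gps = [GP().fit(X0, fj(X0)) for fj in f]
x_next, j_next = next_evaluation(gps, grid, weights=np.array([0.5, 0.5]), costs=[1.0, 5.0])
print(f"evaluate objective {j_next} at x = {x_next:.3f}")
```

Dividing the knowledge gradient by the per-objective cost is what lets the acquisition skip an expensive evaluation when a cheap one is expected to shrink uncertainty about the scalarized optimum almost as much.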