Many real-world applications require solving families of expensive multi-objective optimization problems~(EMOPs) under varying operational conditions. This gives rise to parametric expensive multi-objective optimization problems (P-EMOPs), where each task parameter defines a distinct optimization instance. Multi-objective Bayesian optimization methods are widely used to find finite sets of Pareto-optimal solutions for individual tasks. P-EMOPs, however, pose a fundamental challenge: the continuous task parameter space contains infinitely many distinct problems, each requiring its own expensive evaluations. This demands an inverse model that can directly predict optimized solutions for any task-preference query without expensive re-evaluation. This paper introduces a novel parametric multi-task multi-objective Bayesian optimizer that learns this inverse model by alternating between (1) acquisition-driven search that exploits inter-task synergies and (2) generative solution sampling via conditional generative models. This approach enables efficient optimization across related tasks and ultimately achieves direct solution prediction for unseen parameterized EMOPs without additional expensive evaluations. We theoretically justify the faster convergence obtained by exploiting inter-task synergies through task-aware Gaussian processes, and empirical studies of our optimizer and inverse model on synthetic and real-world benchmarks further verify the effectiveness of the proposed generative alternating framework.
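To illustrate the task-aware Gaussian process idea in the simplest possible form (this is a minimal sketch, not the paper's implementation), one can place a product kernel over the joint input of decision variables and task parameters, so that observations from one task inform predictions for nearby tasks. The sketch below uses scikit-learn with an anisotropic RBF kernel, which factorizes as $k_x(x,x')\,k_t(t,t')$; the objective family `f`, the task parameter range, and the query task `t* = 0.65` are all hypothetical choices for demonstration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical family of 1-D objectives indexed by a task parameter t:
# f_t(x) = (x - t)^2, so the optimum shifts smoothly with t.
def f(x, t):
    return (x - t) ** 2

# "Expensive" evaluations collected across several related tasks.
X = rng.uniform(0, 1, size=(40, 1))   # decision variable x
T = rng.uniform(0, 1, size=(40, 1))   # task parameter t
Z = np.hstack([X, T])                 # joint input (x, t)
y = f(X, T).ravel()

# An anisotropic RBF over (x, t) is a product kernel k_x * k_t,
# sharing information between tasks with similar parameters.
kernel = RBF(length_scale=[0.2, 0.2])
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-6).fit(Z, y)

# Predict on an unseen task t* = 0.65 without new expensive evaluations.
x_grid = np.linspace(0, 1, 101).reshape(-1, 1)
Z_star = np.hstack([x_grid, np.full_like(x_grid, 0.65)])
mu = gp.predict(Z_star)
x_best = x_grid[np.argmin(mu), 0]     # predicted minimizer for task t*
```

Because nearby tasks share structure, the posterior mean on the unseen task is already informative, which is the synergy the convergence argument relies on; the paper's full method additionally couples this surrogate with acquisition-driven search and a conditional generative inverse model.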