Multi-objective optimization (MOO) is an important problem in real-world applications. However, for any non-trivial problem, no single solution can optimize all objectives simultaneously. In a typical MOO problem, the goal is to find a set of optimal solutions (the Pareto set) that trade off preferences among the objectives. Scalarization is a well-established method for finding a finite approximation of the whole Pareto set (PS). However, in real-world experimental design scenarios, it is beneficial to obtain the whole PS for flexible exploration of the design space. Recently, Pareto set learning (PSL) has been introduced to approximate the whole PS. PSL involves creating a manifold that represents the Pareto front of a multi-objective optimization problem. A naive approach finds discrete points on the Pareto front through randomly generated preference vectors and connects them by regression. However, this approach is computationally expensive and leads to a poor PS approximation. We propose to optimize the preference points so that they are distributed evenly on the Pareto front. Our formulation leads to a bilevel optimization problem that can be solved by, e.g., differentiable cross-entropy methods. We demonstrate the efficacy of our method on complex and difficult black-box MOO problems using both synthetic and real-world benchmark data.
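To make the "naive" PSL baseline described above concrete, the following is a minimal illustrative sketch (not the paper's proposed method): sample random preference vectors, solve a weighted-sum scalarized subproblem for each to obtain discrete Pareto-optimal points, then connect them by regression. The toy biobjective problem, the grid-search solver, and the linear regression step are all assumptions chosen for illustration.

```python
import numpy as np

# Toy biobjective problem (assumed for illustration):
#   f1(x) = x^2,  f2(x) = (x - 1)^2,  x in [0, 1].
# Its Pareto set is the whole interval [0, 1], and the weighted-sum
# scalarization w*f1 + (1-w)*f2 has the closed-form minimizer x*(w) = 1 - w.

rng = np.random.default_rng(0)

def scalarized_min(w, grid):
    """Minimize w*f1 + (1-w)*f2 by brute-force grid search."""
    vals = w * grid**2 + (1 - w) * (grid - 1) ** 2
    return grid[np.argmin(vals)]

grid = np.linspace(0.0, 1.0, 2001)
prefs = rng.uniform(0.05, 0.95, size=50)   # randomly generated preference vectors
points = np.array([scalarized_min(w, grid) for w in prefs])

# Regression step: fit the preference-to-solution map x*(w) with a
# degree-1 polynomial; here the true map is x*(w) = 1 - w.
coeffs = np.polyfit(prefs, points, deg=1)
approx = np.poly1d(coeffs)
```

Even on this trivial problem, the sketch shows the baseline's cost structure: one full scalarized optimization per preference vector, with approximation quality hinging on where the random preferences happen to fall, which motivates optimizing the preference points instead.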