Multi-objective combinatorial optimization (MOCO) problems are prevalent in various real-world applications. Most existing neural methods for MOCO rely solely on decomposition and use exact hypervolume computation to enhance diversity. However, because of ambiguous decomposition and time-consuming hypervolume calculation, these methods often approximate only limited regions of the Pareto front and spend excessive time on diversity enhancement. To address these limitations, we design a Geometry-Aware Pareto set Learning algorithm named GAPL, which provides a novel geometric perspective for neural MOCO via a Pareto attention model based on hypervolume expectation maximization. In addition, we propose a hypervolume residual update strategy that enables the Pareto attention model to capture both local and non-local information of the Pareto set/front. We also design a novel inference approach to further improve the quality of the solution set and to speed up hypervolume calculation and local subset selection. Experimental results on three classic MOCO problems demonstrate that GAPL outperforms state-of-the-art neural baselines via superior decomposition and efficient diversity enhancement.
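To illustrate why exact hypervolume calculation becomes a bottleneck, the following minimal sketch computes the exact hypervolume of a two-objective (minimization) front by a sort-and-sweep; this simple scheme is specific to two objectives, and exact computation grows sharply more expensive as the number of objectives increases, which is the cost the abstract refers to. The function name and inputs are illustrative, not from the paper.

```python
# Illustrative sketch (not the paper's method): exact hypervolume
# for a 2-objective minimization front with respect to a reference point.
def hypervolume_2d(front, ref):
    """front: iterable of (f1, f2) points; ref: reference point (r1, r2).

    Returns the area dominated by the front and bounded by ref.
    """
    # Keep only points that strictly dominate the reference point,
    # then sweep in order of increasing first objective.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:  # skip points dominated by an earlier point
            hv += (ref[0] - x) * (prev_y - y)  # add the new rectangle slab
            prev_y = y
    return hv

# Example: three mutually nondominated points.
# hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)) → 6.0
```

For two objectives this sweep runs in O(n log n), but no comparably cheap exact scheme exists in higher dimensions, which motivates approximating hypervolume (e.g., via expectation maximization) rather than computing it exactly during training.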