Single-stage neural combinatorial optimization solvers have achieved near-optimal results on various small-scale combinatorial optimization (CO) problems without requiring expert knowledge. However, these solvers exhibit significant performance degradation when applied to large-scale CO problems. Recently, two-stage neural methods motivated by divide-and-conquer strategies have shown efficiency in addressing large-scale CO problems. Nevertheless, the performance of these methods relies heavily on problem-specific heuristics in either the dividing or the conquering procedure, which limits their applicability to general CO problems. Moreover, these methods employ separate training schemes and ignore the interdependence between the dividing and conquering strategies, often leading to sub-optimal solutions. To tackle these drawbacks, this article develops a unified neural divide-and-conquer framework (i.e., UDC) for solving general large-scale CO problems. UDC offers a Divide-Conquer-Reunion (DCR) training method to eliminate the negative impact of a sub-optimal dividing policy. Employing a high-efficiency Graph Neural Network (GNN) for global instance division and a fixed-length sub-path solver for conquering the divided sub-problems, the proposed UDC framework demonstrates extensive applicability, achieving superior performance on 10 representative large-scale CO problems. The code is available at https://github.com/CIAM-Group/NCO_code/tree/main/single_objective/UDC-Large-scale-CO-master.
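UDC's learned dividing policy (the GNN) and conquering policy (the neural sub-path solver) are not reproduced here. As a toy illustration of the divide-and-conquer principle the abstract describes, the sketch below splits a TSP tour into fixed-length sub-paths, improves each sub-path with a brute-force solver standing in for the learned conquering policy, and shifts the segment boundaries between rounds in loose analogy to the reunion step that repairs sub-optimal divisions. All function names and parameters are hypothetical, not from the paper's codebase.

```python
import itertools
import math
import random


def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])


def tour_length(tour, pts):
    """Length of a closed tour over point indices."""
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))


def path_len(path, pts):
    """Length of an open path (endpoints not joined)."""
    return sum(dist(pts[path[i]], pts[path[i + 1]])
               for i in range(len(path) - 1))


def solve_subpath(sub, pts):
    """Conquer one sub-problem: brute-force the interior ordering of a
    fixed-length sub-path while keeping both endpoints pinned, so the
    connections to neighboring segments stay valid."""
    first, last, interior = sub[0], sub[-1], sub[1:-1]
    best, best_len = list(sub), path_len(sub, pts)
    for perm in itertools.permutations(interior):
        cand = [first, *perm, last]
        cand_len = path_len(cand, pts)
        if cand_len < best_len:
            best, best_len = cand, cand_len
    return best


def divide_and_conquer(tour, pts, k=5, rounds=2):
    """Divide the tour into contiguous sub-paths of length k, conquer each
    independently, and reunite; shifting the cut points between rounds lets
    later rounds fix edges that earlier divisions froze at segment borders."""
    tour = list(tour)
    n = len(tour)
    for r in range(rounds):
        offset = (r * k // 2) % n              # shift boundaries each round
        rotated = tour[offset:] + tour[:offset]
        new = []
        for i in range(0, n - n % k, k):
            new.extend(solve_subpath(rotated[i:i + k], pts))
        new.extend(rotated[n - n % k:])        # leftover nodes pass through
        tour = new
    return tour
```

Because each sub-path is re-optimized with pinned endpoints, the total tour length can only decrease or stay the same; the boundary shift between rounds is what gives segment-border edges a chance to improve, mirroring (in spirit only) why a reunion step matters when the initial division is sub-optimal.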