Graph neural networks (GNNs) trained with unsupervised learning can solve large-scale combinatorial optimization problems (COPs) with efficient time complexity, making them versatile for various applications. However, because this approach maps a COP onto the training process of a GNN, and mainstream backpropagation-based training algorithms are prone to becoming trapped in local minima, its optimization performance still falls short of current state-of-the-art (SOTA) COP methods. To address this issue, inspired by the possibly chaotic dynamics of learning in real brains, we introduce a chaotic training algorithm, chaotic graph backpropagation (CGBP), which adds a local loss function to the GNN that makes the training process not only chaotic but also highly efficient. Unlike existing methods, we show that the global ergodicity and pseudo-randomness of these chaotic dynamics enable CGBP to train each GNN effectively toward a global optimum, thereby solving COPs efficiently. We apply CGBP to various COPs, including maximum independent set, maximum cut, and graph coloring. Results on several large-scale benchmark datasets show that CGBP outperforms not only existing GNN algorithms but also SOTA methods. Beyond solving large-scale COPs, CGBP serves as a universal learning algorithm for GNNs: as a plug-in unit, it can be easily integrated into any existing method to improve its performance.
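To make the general idea concrete, here is a minimal toy sketch (not the paper's CGBP, and without an actual GNN): a max-cut instance is relaxed to continuous node probabilities, which are optimized by gradient descent on a QUBO-style loss, with an annealed chaotic perturbation driven by the logistic map to illustrate how chaotic dynamics can help escape local minima. The graph, loss, and annealing schedule are all illustrative assumptions.

```python
import numpy as np

# Toy illustration only (not the authors' CGBP): relax max-cut on a small
# graph to continuous probabilities p in [0, 1] and minimize a QUBO-style
# loss by gradient descent, adding an annealed chaotic perturbation from
# the logistic map to mimic chaos-assisted escape from local minima.

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # 4-cycle plus one chord
n = 4

def loss_grad(p):
    # Relaxed cut value: sum over edges of p_i + p_j - 2 p_i p_j.
    # We minimize its negative, so the gradient accumulates -(1 - 2 p_j).
    g = np.zeros(n)
    for i, j in edges:
        g[i] -= 1.0 - 2.0 * p[j]
        g[j] -= 1.0 - 2.0 * p[i]
    return g

rng = np.random.default_rng(0)
p = rng.uniform(0.2, 0.8, n)   # soft node assignments
z = rng.uniform(0.1, 0.9, n)   # one chaotic state per parameter
lr, amp = 0.1, 0.5             # learning rate, initial chaos amplitude

for t in range(200):
    z = 4.0 * z * (1.0 - z)             # logistic map at r = 4 (chaotic)
    g = loss_grad(p) + amp * (z - 0.5)  # gradient plus chaotic drive
    p = np.clip(p - lr * g, 0.0, 1.0)
    amp *= 0.97                         # anneal chaos so updates converge

assignment = (p > 0.5).astype(int)      # round to a hard 2-partition
cut = sum(int(assignment[i] != assignment[j]) for i, j in edges)
print(assignment, cut)
```

As the chaos amplitude decays, the dynamics reduce to plain gradient descent, so the run settles on a hard partition; early on, the chaotic drive lets the iterate wander between basins instead of committing to the first local optimum it finds.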