Message-passing graph neural networks (MPNNs) have emerged as a powerful paradigm for graph-based machine learning. Despite their effectiveness, MPNNs face challenges such as under-reaching and over-squashing, where limited receptive fields and structural bottlenecks hinder information flow in the graph. While graph transformers hold promise in addressing these issues, their scalability is limited by quadratic complexity in the number of nodes, rendering them impractical for larger graphs. Here, we propose implicitly rewired message-passing neural networks (IPR-MPNNs), a novel approach that integrates implicit probabilistic graph rewiring into MPNNs. By introducing a small number of virtual nodes, i.e., adding additional nodes to a given graph and connecting them to existing nodes in a differentiable, end-to-end manner, IPR-MPNNs enable long-distance message propagation while circumventing quadratic complexity. Theoretically, we demonstrate that IPR-MPNNs surpass the expressiveness of traditional MPNNs. Empirically, we validate our approach by showing that it mitigates under-reaching and over-squashing effects, achieving state-of-the-art performance across multiple graph datasets. Notably, IPR-MPNNs outperform graph transformers while being significantly more computationally efficient.
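The virtual-node mechanism can be conveyed with a short, self-contained sketch. The following is a minimal illustration only, not the paper's actual implementation: it assumes a Gumbel-softmax (straight-through) relaxation for the differentiable node-to-virtual-node assignments, and the class name VirtualNodeRewiring and parameters num_virtual and tau are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VirtualNodeRewiring(nn.Module):
    """Minimal sketch of implicit rewiring via virtual nodes.

    Each original node samples a (soft) assignment to one of `num_virtual`
    virtual nodes; node messages are pooled into the virtual nodes and
    broadcast back, giving every pair of original nodes a short path
    through a shared virtual node.
    """

    def __init__(self, dim: int, num_virtual: int = 4, tau: float = 1.0):
        super().__init__()
        self.assign = nn.Linear(dim, num_virtual)   # assignment logits per node
        self.virtual_emb = nn.Parameter(torch.randn(num_virtual, dim))
        self.update = nn.Linear(2 * dim, dim)
        self.tau = tau

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: [n, dim] node features, e.g., after a standard MPNN layer.
        logits = self.assign(h)                                  # [n, k]
        # Differentiable (straight-through) sampling of hard assignments.
        a = F.gumbel_softmax(logits, tau=self.tau, hard=True)    # [n, k]
        # Pool node messages into each virtual node (mean over assigned nodes).
        counts = a.sum(dim=0, keepdim=True).clamp(min=1.0)       # [1, k]
        v = self.virtual_emb + (a.t() @ h) / counts.t()          # [k, dim]
        # Broadcast each node's virtual-node state back and update the node.
        back = a @ v                                             # [n, dim]
        return self.update(torch.cat([h, back], dim=-1))         # [n, dim]
```

Because each of the n original nodes exchanges messages only with k ≪ n virtual nodes, the added cost per layer is O(n·k) rather than the O(n²) of full attention, which is the sense in which quadratic complexity is circumvented.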