Graph neural networks (GNNs) are commonly divided into message-passing neural networks (MPNNs) and spectral graph neural networks, reflecting two largely separate research traditions in machine learning and signal processing. This paper argues that this divide is largely artificial and hinders progress in the field. We propose a viewpoint in which MPNNs and spectral GNNs are understood as different parametrizations of permutation-equivariant operators acting on graph signals. From this perspective, many popular architectures are equivalent in expressive power, and genuine gaps arise only in specific regimes. We further argue that MPNNs and spectral GNNs offer complementary strengths: MPNNs provide a natural language for discrete structure and for expressivity analysis using tools from logic and graph-isomorphism research, while the spectral perspective provides principled tools for understanding smoothing, bottlenecks, stability, and community structure. Overall, we posit that progress in graph learning will be accelerated by clearly understanding the key similarities and differences between these two families of GNNs, and by unifying these perspectives within a common theoretical and conceptual framework rather than treating them as competing paradigms.
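The claim that MPNNs and spectral GNNs parametrize the same permutation-equivariant operators can be made concrete with a standard observation: a degree-K polynomial spectral filter g(L) = Σ_k θ_k L^k applied to a graph signal can be computed either in the Laplacian eigenbasis (the spectral view) or by K rounds of 1-hop neighbor aggregation (the message-passing view). The following is a minimal illustrative sketch, not an implementation from the paper; the graph, signal, and coefficients θ are hypothetical.

```python
import numpy as np

# Small undirected graph: adjacency matrix A and combinatorial Laplacian L = D - A.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, -2.0, 3.0, 0.5])   # node signal (one scalar feature per node)
theta = [0.5, -0.3, 0.1]              # hypothetical polynomial filter coefficients

# Spectral view: diagonalize L and apply g pointwise to the eigenvalues.
eigvals, U = np.linalg.eigh(L)
g = sum(t * eigvals**k for k, t in enumerate(theta))
y_spectral = U @ (g * (U.T @ x))

# Message-passing view: each multiplication by L is one round of local
# aggregation, so g(L) x needs only K sparse matrix-vector products.
y_mp = np.zeros_like(x)
h = x.copy()
for t in theta:
    y_mp += t * h
    h = L @ h                         # one 1-hop aggregation round

assert np.allclose(y_spectral, y_mp)  # same operator, two parametrizations
```

The message-passing computation never forms the eigendecomposition, which is why polynomial spectral filters scale to large sparse graphs; the two views differ in parametrization and cost, not in the operator being expressed.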