Incorporating spectral information to enhance Graph Neural Networks (GNNs) has shown promising results but raises a fundamental challenge due to the inherent ambiguity of eigenvectors. Various architectures, referred to as spectral invariant architectures, have been proposed to address this ambiguity. Notable examples include GNNs and Graph Transformers that use spectral distances, spectral projection matrices, or other invariant spectral features. However, the potential expressive power of these spectral invariant architectures remains largely unclear. The goal of this work is to gain a deep theoretical understanding of the expressive power obtainable when using spectral features. We first introduce a unified message-passing framework for designing spectral invariant GNNs, called Eigenspace Projection GNN (EPNN). A comprehensive analysis shows that EPNN essentially unifies all prior spectral invariant architectures, in that they are either strictly less expressive than or equivalent to EPNN. A fine-grained expressiveness hierarchy among different architectures is also established. On the other hand, we prove that EPNN itself is bounded by a recently proposed class of Subgraph GNNs, implying that all these spectral invariant architectures are strictly less expressive than 3-WL. Finally, we discuss whether using spectral features can provide additional expressiveness when combined with more expressive GNNs.
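To make the notion of eigenspace projections concrete, here is a minimal sketch of how basis-invariant spectral projection matrices might be computed from a graph Laplacian. The key point is that while individual eigenvectors suffer from sign/basis ambiguity, the projection matrix onto each eigenspace is uniquely determined. The function name and grouping tolerance below are illustrative assumptions, not an implementation from the paper.

```python
import numpy as np

def eigenspace_projections(adj, tol=1e-8):
    """Compute eigenspace projection matrices of the combinatorial Laplacian.

    For each distinct eigenvalue lambda with eigenvector basis V, the
    projection P = V @ V.T is invariant to the choice of orthonormal basis
    (sign flips, rotations within the eigenspace), unlike raw eigenvectors.
    """
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj                 # combinatorial Laplacian L = D - A
    eigvals, eigvecs = np.linalg.eigh(lap)   # ascending eigenvalues
    projections = []
    start = 0
    for i in range(1, len(eigvals) + 1):
        # close the current group when the next eigenvalue differs by > tol
        if i == len(eigvals) or eigvals[i] - eigvals[start] > tol:
            V = eigvecs[:, start:i]
            projections.append((eigvals[start], V @ V.T))
            start = i
    return projections
```

The entries of these projection matrices (or derived quantities such as spectral distances) can then serve as invariant edge features in a message-passing scheme, which is the general pattern the spectral invariant architectures above share.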