Recent advances in Graph Neural Networks (GNNs) and Graph Transformers (GTs) have been driven by innovations in architectures and Positional Encodings (PEs), which are critical for augmenting node features and capturing graph topology. PEs are essential for GTs, which lack message passing and would otherwise have no access to topological information. However, PEs are typically evaluated alongside novel architectures, making it difficult to isolate their effect on established models. To address this, we present a comprehensive benchmark of PEs in a unified framework that includes both message-passing GNNs and GTs. We also establish theoretical connections between message-passing neural networks (MPNNs) and GTs, and introduce a sparsified GRIT attention mechanism to examine the influence of global connectivity. Our findings demonstrate that previously untested combinations of GNN architectures and PEs can outperform existing methods, offering a more complete picture of the state of the art. To support future research and experimentation within our framework, we make the code publicly available.
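For concreteness, the sketch below illustrates one common way a PE augments node features: Laplacian eigenvector encodings concatenated to the raw features before they enter a GNN or GT. This is a minimal illustrative example under our own assumptions, not the benchmark's actual pipeline; the function name `laplacian_pe` and the toy graph are hypothetical.

```python
# Minimal sketch (assumption, not the paper's implementation): augment node
# features with a Laplacian eigenvector positional encoding.
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Return k eigenvectors of the symmetric-normalized Laplacian,
    skipping the trivial constant eigenvector (eigenvalue 0)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5          # D^{-1/2}, safe for isolated nodes
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)           # eigenvalues in ascending order
    # Note: eigenvector signs (and bases of repeated eigenvalues) are not
    # unique -- a well-known ambiguity of Laplacian PEs.
    return eigvecs[:, 1:k + 1]

# Toy graph: a 4-cycle with one-hot node features.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
x = np.eye(4)                                  # raw node features, shape (4, 4)
pe = laplacian_pe(adj, k=2)                    # positional encoding, shape (4, 2)
x_aug = np.concatenate([x, pe], axis=1)        # augmented features, shape (4, 6)
```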