Recently, Transformers for graph representation learning have become increasingly popular, achieving state-of-the-art performance on a wide variety of graph datasets, either alone or in combination with message-passing graph neural networks (MP-GNNs). Infusing graph inductive biases into the innately structure-agnostic Transformer architecture, in the form of structural or positional encodings (PEs), is key to achieving these impressive results. However, designing such encodings is tricky, and disparate attempts to engineer them have been made, including Laplacian eigenvectors, relative random-walk probabilities (RRWP), spatial encodings, centrality encodings, and edge encodings. In this work, we argue that such encodings may not be required at all, provided the attention mechanism itself incorporates information about the graph structure. We introduce Eigenformer, a Graph Transformer employing a novel spectrum-aware attention mechanism cognizant of the Laplacian spectrum of the graph, and empirically show that it achieves performance competitive with SOTA Graph Transformers on a number of standard GNN benchmarks. Additionally, we theoretically prove that Eigenformer can express various graph structural connectivity matrices, which is particularly essential when learning over smaller graphs.
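The abstract does not define the spectrum-aware attention mechanism itself. As a purely illustrative reading of the idea, the sketch below biases standard dot-product attention with a learned spectral filter of the graph Laplacian, U diag(phi(lambda)) U^T, built from its eigenpairs. The class name SpectrumAwareAttention, the filter phi, and the exact form of the bias are assumptions for illustration, not the paper's construction.

```python
import torch
import torch.nn as nn

class SpectrumAwareAttention(nn.Module):
    """Hypothetical sketch of spectrum-aware attention: dot-product
    attention whose logits are biased by a learned graph filter
    B = U diag(phi(lambda)) U^T over the Laplacian eigenpairs.
    Illustrative only; not the exact Eigenformer mechanism."""

    def __init__(self, dim: int, num_eigs: int, hidden: int = 16):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.num_eigs = num_eigs
        # phi: small MLP mapping each eigenvalue to a scalar filter weight.
        self.phi = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x, evals, evecs):
        # x: (n, dim) node features; evals: (n,) Laplacian eigenvalues
        # in ascending order; evecs: (n, n) matching eigenvectors as columns.
        n, d = x.shape
        logits = self.q(x) @ self.k(x).T / d ** 0.5      # (n, n)
        lam = evals[: self.num_eigs].unsqueeze(-1)       # (k, 1)
        U = evecs[:, : self.num_eigs]                    # (n, k)
        # Structural bias: a learned spectral filter of the Laplacian.
        bias = U @ torch.diag(self.phi(lam).squeeze(-1)) @ U.T  # (n, n)
        attn = torch.softmax(logits + bias, dim=-1)
        return attn @ self.v(x)

# Toy usage on a 3-node path graph, with L = D - A.
A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
L = torch.diag(A.sum(dim=1)) - A
evals, evecs = torch.linalg.eigh(L)  # symmetric eigendecomposition
layer = SpectrumAwareAttention(dim=8, num_eigs=3)
out = layer(torch.randn(3, 8), evals, evecs)  # (3, 8) node representations
```

One way to see why such a bias relates to the expressivity claim: on a d-regular graph the adjacency matrix A = dI - L is itself a fixed spectral filter of L, so a sufficiently flexible phi can recover familiar connectivity matrices directly inside the attention scores.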