Fully connected Graph Transformers (GTs) have rapidly become prominent in the static graph community as an alternative to Message-Passing models, which suffer from limited expressivity, over-squashing, and under-reaching. In a dynamic context, however, by interconnecting all nodes across multiple snapshots with self-attention, GTs lose both structural and temporal information. In this work, we introduce Supra-LAplacian encoding for spatio-temporal TransformErs (SLATE), a new spatio-temporal encoding that leverages the GT architecture while preserving spatio-temporal information. Specifically, we transform Discrete Time Dynamic Graphs into multi-layer graphs and exploit the spectral properties of their associated supra-Laplacian matrix. Our second contribution explicitly models nodes' pairwise relationships with a cross-attention mechanism, providing an accurate edge representation for dynamic link prediction. SLATE outperforms numerous state-of-the-art methods based on Message-Passing Graph Neural Networks combined with recurrent models (e.g., LSTMs), as well as Dynamic Graph Transformers, on 9 datasets. Code is available at: github.com/ykrmm/SLATE.
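To make the construction concrete, the following is a minimal sketch (not the authors' implementation; for that, see the linked repository) of how a supra-Laplacian encoding can be computed: each snapshot's adjacency matrix becomes a diagonal block of a supra-adjacency matrix, copies of the same node in consecutive snapshots are coupled with an assumed inter-layer weight `w`, and the low-frequency eigenvectors of the resulting Laplacian serve as a joint spatio-temporal positional encoding. The function name and parameters here are illustrative.

```python
import numpy as np

def supra_laplacian_encoding(snapshots, w=1.0, k=4):
    """snapshots: list of (n, n) adjacency matrices, one per time step.
    Returns a (T*n, k) matrix of supra-Laplacian eigenvectors, one row
    per (node, time) pair, usable as a spatio-temporal encoding."""
    T, n = len(snapshots), snapshots[0].shape[0]
    A = np.zeros((T * n, T * n))
    # Intra-layer structure: one diagonal block per snapshot.
    for t, At in enumerate(snapshots):
        A[t*n:(t+1)*n, t*n:(t+1)*n] = At
    # Inter-layer coupling: link each node to its own copy in the
    # adjacent snapshot (multi-layer graph construction).
    idx = np.arange(n)
    for t in range(T - 1):
        A[t*n + idx, (t+1)*n + idx] = w
        A[(t+1)*n + idx, t*n + idx] = w
    L = np.diag(A.sum(axis=1)) - A  # combinatorial supra-Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    # Skip the trivial constant eigenvector; keep the next k.
    return eigvecs[:, 1:k+1]

# Toy example: 3 nodes observed over 2 snapshots.
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
A2 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
enc = supra_laplacian_encoding([A1, A2], k=2)
print(enc.shape)  # (6, 2): one 2-dim encoding per (node, time) pair
```

Because the inter-layer links connect the time layers, the spectrum of this single matrix mixes structural and temporal smoothness, which is what lets a fully connected Transformer recover both kinds of information from the encoding alone.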