Dynamic graph representation learning has emerged as a crucial research area, driven by the growing need to analyze time-evolving graph data in real-world applications. While recent approaches leveraging recurrent neural networks (RNNs) and graph neural networks (GNNs) have shown promise, they often fail to adequately capture the impact of temporal edge states on inter-node relationships, consequently overlooking the dynamic changes in node features induced by these evolving relationships. Furthermore, these methods suffer from the over-smoothing problem inherent to GNNs, which hinders the extraction of global structural features. To address these challenges, we introduce the Recurrent Structure-reinforced Graph Transformer (RSGT), a novel framework for dynamic graph representation learning. RSGT first employs a heuristic method to explicitly model temporal edge states, assigning distinct edge types and weights according to the differences between consecutive snapshots and thereby integrating these temporal states into the graph's topological structure. We then propose a structure-reinforced graph transformer that, through a recurrent learning paradigm, captures temporal node representations encoding both graph topology and evolving dynamics, enabling the extraction of both local and global structural features. Comprehensive experiments on four real-world datasets demonstrate RSGT's superior performance in discrete dynamic graph representation learning, consistently outperforming existing methods on dynamic link prediction tasks.
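The snapshot-difference heuristic described above can be illustrated with a minimal sketch. The function names, the three state labels (`added`, `persistent`, `removed`), and the frequency-based weighting below are illustrative assumptions for exposition, not the paper's actual formulation:

```python
from collections import Counter

def edge_temporal_states(prev_edges, curr_edges):
    """Classify edges by comparing two consecutive snapshots.

    prev_edges, curr_edges: sets of (u, v) tuples.
    Returns {edge: state}, where state is 'added', 'persistent', or 'removed'.
    (State labels are hypothetical, chosen for illustration.)
    """
    states = {}
    for e in curr_edges - prev_edges:
        states[e] = "added"        # appears only in the current snapshot
    for e in curr_edges & prev_edges:
        states[e] = "persistent"   # survives from the previous snapshot
    for e in prev_edges - curr_edges:
        states[e] = "removed"      # present before, gone in the current snapshot
    return states

def edge_weights(snapshots):
    """Assign each edge a weight proportional to how often it appears across
    snapshots -- a simple stand-in for the paper's snapshot-difference weighting."""
    counts = Counter(e for snap in snapshots for e in snap)
    total = len(snapshots)
    return {e: c / total for e, c in counts.items()}
```

These typed, weighted edges can then be fed to the structure-reinforced graph transformer as part of the graph's topology, so the model sees edge evolution directly rather than inferring it from node features alone.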