Temporal link prediction in dynamic graphs is a fundamental problem in many real-world systems. Existing temporal graph neural networks mainly focus on learning representations of historical interactions. Despite their strong performance, these models are still purely discriminative, producing point estimates for future links and lacking an explicit mechanism to capture the uncertainty and sequential structure of future temporal interactions. In this paper, we propose SDG, a novel sequence-level diffusion framework that unifies dynamic graph learning with generative denoising. Specifically, SDG injects noise into the entire historical interaction sequence and jointly reconstructs all interaction embeddings through a conditional denoising process, thereby enabling the model to capture more comprehensive interaction distributions. To align the generative process with temporal link prediction, we employ a cross-attention denoising decoder to guide the reconstruction of the destination sequence and optimize the model in an end-to-end manner. Extensive experiments on various temporal graph benchmarks show that SDG consistently achieves state-of-the-art performance in the temporal link prediction task.
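The abstract names two ingredients: injecting noise into the historical interaction sequence, and reconstructing it with a cross-attention denoising decoder. The SDG architecture itself is not specified here, so the following is only a minimal NumPy sketch of those two generic building blocks (a cosine noise schedule for the forward process and a single-head cross-attention denoising step); every function and variable name is hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_alpha_bar(t, T):
    # Cumulative signal level under a cosine noise schedule (hypothetical choice;
    # the paper's actual schedule is not stated in the abstract).
    return np.cos((t / T) * np.pi / 2) ** 2

def forward_noise(x0, t, T):
    # Diffusion forward process applied to the WHOLE embedding sequence at once:
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    ab = cosine_alpha_bar(t, T)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps, eps

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_denoise(noisy_dst, context, Wq, Wk, Wv):
    # Single-head cross-attention: queries come from the noised destination
    # sequence, keys/values from the (clean) historical context embeddings.
    q = noisy_dst @ Wq
    k = context @ Wk
    v = context @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return attn @ v, attn

# Toy run: a length-8 sequence of 16-dim interaction embeddings.
L, d = 8, 16
dst = rng.standard_normal((L, d))          # destination-node embeddings
ctx = rng.standard_normal((L, d))          # historical context embeddings
noisy, eps = forward_noise(dst, t=250, T=1000)
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
recon, attn = cross_attention_denoise(noisy, ctx, Wq, Wk, Wv)
```

In a trained model the attention projections would be learned end-to-end, with the loss comparing `recon` against the clean sequence (or the injected noise `eps`), so that all interaction embeddings are reconstructed jointly rather than one link at a time.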