Temporal Knowledge Graph Forecasting (TKGF) aims to predict future events based on observed historical events. Recently, Large Language Models (LLMs) have exhibited remarkable capabilities, generating significant research interest in their application to reasoning over temporal knowledge graphs (TKGs). Existing LLM-based methods integrate retrieved historical facts or static graph representations into LLMs. Despite their notable performance, these methods are limited by insufficient modeling of temporal patterns and ineffective cross-modal alignment between graph and language, hindering the ability of LLMs to fully grasp the temporal and structural information in TKGs. To tackle these issues, we propose TGL-LLM, a novel framework that integrates temporal graph learning into an LLM-based temporal knowledge graph model. Specifically, we introduce temporal graph learning to capture temporal and relational patterns and to obtain historical graph embeddings. Furthermore, we design a hybrid graph tokenization scheme to sufficiently model temporal patterns within LLMs. To achieve better alignment between graph and language, we employ a two-stage training paradigm that finetunes LLMs on high-quality, diverse data. Extensive experiments on three real-world datasets show that our approach outperforms a range of state-of-the-art (SOTA) methods.