Temporal Knowledge Graph Completion (TKGC) is a challenging task that predicts missing event links at future timestamps by leveraging established temporal structural knowledge. This paper provides a comprehensive perspective on harnessing Large Language Models (LLMs) for reasoning over temporal knowledge graphs and presents an easily transferable pipeline. On the graph-modality side, we highlight the LLMs' ability to discern the structural information of pivotal nodes within the historical chain. Regarding the generation mode of the LLMs used for inference, we thoroughly explore the variation induced by a range of the models' inherent factors, with particular attention to their difficulty in comprehending reverse logic. We adopt a parameter-efficient fine-tuning strategy to align the LLMs with the task requirements, facilitating the learning of the key knowledge highlighted above. Comprehensive experiments on several widely used datasets show that our framework matches or exceeds existing methods on many popular metrics. In addition, we conduct extensive ablation studies and comparisons with several advanced commercial LLMs to investigate the crucial factors influencing LLM performance on structured temporal knowledge inference tasks.
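Parameter-efficient fine-tuning typically trains only a small number of added weights while the pretrained model stays frozen; LoRA-style low-rank adapters are a common instance. A minimal numerical sketch of the idea (illustrative only; the paper's actual fine-tuning setup, rank, and scaling are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 6, 2, 16  # hypothetical dimensions and LoRA rank

# Frozen pretrained weight; only the low-rank factors A and B are trained.
W = rng.normal(size=(d_out, d_in))
A = rng.normal(scale=0.01, size=(r, d_in))  # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; W itself never changes,
    # so only r * (d_in + d_out) parameters are updated during fine-tuning.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.normal(size=(3, d_in))
# With B initialized to zero, the adapted model exactly matches the base model.
assert np.allclose(lora_forward(x), x @ W.T)
```

Because B starts at zero, fine-tuning begins from the unmodified base model, and the trained adapter can be merged back into W at inference time with no added latency.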