Temporal knowledge graphs (TKGs) support reasoning over time-evolving facts, yet state-of-the-art models are often computationally heavy and costly to deploy. Existing compression and distillation techniques are largely designed for static graphs; applying them directly to temporal settings can overlook time-dependent interactions and degrade performance. We propose an LLM-assisted distillation framework tailored to temporal knowledge graph reasoning. Beyond a conventional high-capacity temporal teacher, we incorporate a large language model as an auxiliary instructor that provides enriched supervision. The LLM supplies broad background knowledge and temporally informed signals, enabling a lightweight student to better model event dynamics without increasing inference-time complexity. Training jointly optimizes supervised and distillation objectives, with a staged alignment strategy that progressively integrates guidance from both teachers. Extensive experiments on multiple public TKG benchmarks with diverse backbone architectures demonstrate that the proposed approach consistently improves link prediction performance over strong distillation baselines while keeping the student model compact and efficient. These results highlight the potential of large language models as effective teachers for transferring temporal reasoning capability to resource-efficient TKG systems.
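The training recipe summarized above (a supervised objective plus distillation terms from two teachers, combined under a staged schedule) can be illustrated with a minimal PyTorch sketch. Everything here is an assumption for illustration: the function names, the softened-KL distillation form, the temperature value, and the linear warm-up standing in for the "staged alignment strategy" are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def distill_kl(student_logits, teacher_logits, temperature=2.0):
    # Softened-KL knowledge distillation term (assumed form, not necessarily
    # the paper's): KL(teacher || student) at temperature T, scaled by T^2
    # to keep gradient magnitudes comparable across temperatures.
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

def joint_loss(student_logits, labels, temporal_teacher_logits, llm_logits,
               epoch, warmup_epochs=5):
    # Supervised cross-entropy on link-prediction labels plus two distillation
    # terms. Ramping the LLM term in linearly is one plausible reading of
    # "progressively integrate guidance from both teachers"; the actual
    # staging used by the authors may differ.
    ce = F.cross_entropy(student_logits, labels)
    kd_temporal = distill_kl(student_logits, temporal_teacher_logits)
    kd_llm = distill_kl(student_logits, llm_logits)
    llm_weight = min(1.0, epoch / warmup_epochs)
    return ce + kd_temporal + llm_weight * kd_llm
```

Since the LLM acts only as a training-time instructor, its scores over candidate entities could be precomputed and cached, leaving the student's inference-time cost unchanged, consistent with the abstract's claim.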