Question Answering over Temporal Knowledge Graphs (TKGQA) has attracted growing interest for handling time-sensitive queries. However, existing methods still struggle with: 1) weak incorporation of temporal constraints into question representations, causing biased reasoning; 2) limited ability to perform explicit multi-hop reasoning; and 3) suboptimal fusion of language and graph representations. We propose a novel framework with temporal-aware question encoding, multi-hop graph reasoning, and multi-view heterogeneous information fusion. Specifically, our approach introduces: 1) a constraint-aware question representation that combines semantic cues from language models with temporal entity dynamics; 2) a temporal-aware graph neural network for explicit multi-hop reasoning via time-aware message passing; and 3) a multi-view attention mechanism for more effective fusion of question context and temporal graph knowledge. Experiments on several TKGQA benchmarks demonstrate consistent improvements over multiple baselines.
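To make the second component concrete, the sketch below illustrates the general idea of time-aware message passing: each edge's message is down-weighted by its temporal distance to the question's time constraint, so evidence closer to the queried time dominates aggregation. This is a minimal, hypothetical illustration of the generic technique, not the paper's actual model; the exponential-decay weighting, the `tau` bandwidth, and all function and variable names are assumptions for exposition.

```python
import numpy as np

def time_aware_message_passing(node_feats, edges, edge_times, query_time, tau=2.0):
    """One round of message passing on a temporal graph.

    Each directed edge (src, dst) carries a timestamp; its message
    node_feats[src] is weighted by exp(-|t - query_time| / tau), so
    facts temporally closer to the query contribute more. This is an
    illustrative sketch, not the framework's actual update rule.
    """
    updated = node_feats.copy()
    agg = np.zeros_like(node_feats)
    weight_sum = np.zeros(len(node_feats))
    for (src, dst), t in zip(edges, edge_times):
        w = np.exp(-abs(t - query_time) / tau)  # temporal proximity weight
        agg[dst] += w * node_feats[src]
        weight_sum[dst] += w
    # Normalize aggregated messages; nodes with no incoming edges keep
    # their original features.
    mask = weight_sum > 0
    updated[mask] = agg[mask] / weight_sum[mask][:, None]
    return updated
```

For example, if a node receives one message stamped at the query time and another from a distant timestamp, the resulting representation is pulled toward the temporally matching neighbor. Stacking several such rounds yields the explicit multi-hop reasoning the abstract describes, with the question's temporal constraint steering aggregation at every hop.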