Knowledge Graphs (KGs) are foundational structures in many AI applications, representing entities and their interrelations as triples. However, triple-based KGs lack contextual information about relational knowledge, such as temporal dynamics and provenance details, which is crucial for comprehensive knowledge representation and effective reasoning. In contrast, \textbf{Contextual Knowledge Graphs} (CKGs) extend the conventional structure by incorporating additional information such as time validity, geographic location, and source provenance. This integration yields a more nuanced and accurate understanding of knowledge, enabling KGs to offer richer insights and support more sophisticated reasoning. In this work, we first discuss the inherent limitations of triple-based KGs and introduce the concept of contextual KGs, highlighting their advantages in knowledge representation and reasoning. We then present \textbf{KGR$^3$, a context-enriched KG reasoning paradigm} that leverages large language models (LLMs) to retrieve candidate entities and related contexts, rank them based on the retrieved information, and reason about whether sufficient information has been obtained to answer a query. Experimental results demonstrate that KGR$^3$ significantly improves performance on KG completion (KGC) and KG question answering (KGQA) tasks, validating the effectiveness of incorporating contextual information into KG representation and reasoning.
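The retrieve-rank-reason loop of KGR$^3$ can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the in-memory contextual KG, the keyword-overlap scorer standing in for the LLM, and all function names are illustrative assumptions.

```python
# Toy contextual KG: entity -> list of context strings (time validity,
# provenance, location, ...). Purely illustrative data.
CKG = {
    "Berlin": ["capital of Germany since 1990", "located in Europe"],
    "Bonn":   ["capital of West Germany until 1990", "located in Europe"],
}

def retrieve(query):
    """Stage 1 (Retrieve): gather candidate entities with their contexts."""
    return [(entity, contexts) for entity, contexts in CKG.items()]

def score(query, contexts):
    """Stand-in for an LLM relevance judgment: keyword overlap with contexts."""
    words = set(query.lower().split())
    return sum(len(words & set(c.lower().split())) for c in contexts)

def rank(query, candidates):
    """Stage 2 (Rank): order candidates by relevance to the query."""
    return sorted(candidates, key=lambda ec: score(query, ec[1]), reverse=True)

def reason(query, ranked, threshold=2):
    """Stage 3 (Reason): answer only if the top candidate's context suffices."""
    entity, contexts = ranked[0]
    if score(query, contexts) >= threshold:
        return entity
    return None  # insufficient context: a real system would retrieve more

query = "capital of Germany since 1990"
answer = reason(query, rank(query, retrieve(query)))  # -> "Berlin"
```

In a full system, `score` and the sufficiency check in `reason` would be LLM calls, and a `None` result would trigger another retrieval round rather than an unsupported guess.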