Knowledge Graphs (KGs) are foundational structures in many AI applications, representing entities and their interrelations through triples. However, triple-based KGs lack the contextual information surrounding relational knowledge, such as temporal dynamics and provenance details, which is crucial for comprehensive knowledge representation and effective reasoning. In contrast, \textbf{Context Graphs} (CGs) expand upon the conventional triple structure by incorporating additional information such as temporal validity, geographic location, and source provenance. This integration provides a more nuanced and accurate understanding of knowledge, enabling KGs to offer richer insights and support more sophisticated reasoning processes. In this work, we first discuss the inherent limitations of triple-based KGs and introduce the concept of CGs, highlighting their advantages for knowledge representation and reasoning. We then present a context graph reasoning paradigm, \textbf{CGR$^3$}, that leverages large language models (LLMs) to retrieve candidate entities and related contexts, rank them based on the retrieved information, and reason about whether sufficient information has been obtained to answer a query. Our experimental results demonstrate that CGR$^3$ significantly improves performance on KG completion (KGC) and KG question answering (KGQA) tasks, validating the effectiveness of incorporating contextual information into KG representation and reasoning.