The challenge of keeping knowledge in Large Language Models (LLMs) up to date has led to the development of various methods for incorporating new facts. However, existing knowledge editing methods still struggle with multi-hop questions that require accurate fact identification and sequential logical reasoning, particularly when many facts have been edited. To tackle these challenges, this paper introduces Graph Memory-based Editing for Large Language Models (GMeLLo), a straightforward and effective method that merges the explicit knowledge representation of Knowledge Graphs (KGs) with the linguistic flexibility of LLMs. Beyond merely leveraging LLMs for question answering, GMeLLo employs them to convert free-form language into structured queries and fact triples, enabling seamless interaction with KGs for rapid updates and precise multi-hop reasoning. Our results show that GMeLLo significantly surpasses current state-of-the-art (SOTA) knowledge editing methods on the multi-hop question answering benchmark MQuAKE, especially in scenarios with extensive knowledge edits.
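To make the described pipeline concrete, the following is a minimal, self-contained sketch of the core idea: edited facts are stored as (subject, relation, object) triples in a graph memory, and a multi-hop question is answered by chaining relation lookups over that memory. In GMeLLo the conversion of free-form sentences into triples and of questions into structured queries is performed by an LLM; in this sketch those steps are replaced with hard-coded stand-ins, and the `GraphMemory` class and all entity and relation names are hypothetical illustrations under our own assumptions, not the authors' implementation.

```python
# Minimal sketch (hypothetical, not the authors' code): edited facts live as
# (subject, relation, object) triples in an in-memory graph, and a multi-hop
# question is answered by chaining relation lookups over that graph.

from typing import Dict, List, Optional, Tuple

Triple = Tuple[str, str, str]


class GraphMemory:
    """Stores fact triples; a new edit for (subject, relation) overwrites the old object."""

    def __init__(self) -> None:
        self.facts: Dict[Tuple[str, str], str] = {}

    def edit(self, triple: Triple) -> None:
        subj, rel, obj = triple
        self.facts[(subj, rel)] = obj  # knowledge edit: replace the outdated object

    def query(self, subj: str, rel: str) -> Optional[str]:
        return self.facts.get((subj, rel))

    def multi_hop(self, start: str, relations: List[str]) -> Optional[str]:
        """Follow a relation chain, e.g. head_of_state(capital_of(entity))."""
        entity: Optional[str] = start
        for rel in relations:
            if entity is None:
                return None
            entity = self.query(entity, rel)
        return entity


kg = GraphMemory()
# In GMeLLo, an LLM would extract triples like these from free-form sentences
# such as "The United Kingdom's head of state is Charles III."; here they are
# hard-coded for illustration.
kg.edit(("London", "capital_of", "United Kingdom"))
kg.edit(("United Kingdom", "head_of_state", "Elizabeth II"))
kg.edit(("United Kingdom", "head_of_state", "Charles III"))  # the knowledge edit

# The LLM would likewise translate "Who is the head of state of the country that
# London is the capital of?" into a structured query; here, a relation chain.
print(kg.multi_hop("London", ["capital_of", "head_of_state"]))  # -> Charles III
```

Because edits simply overwrite the object stored under a (subject, relation) key, the graph memory reflects new facts immediately, and the multi-hop chain above resolves through the edited fact rather than the stale one.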