Large Language Models (LLMs) have shown remarkable reasoning capabilities on complex tasks, but they still suffer from out-of-date knowledge, hallucinations, and opaque decision-making. In contrast, Knowledge Graphs (KGs) can provide explicit and editable knowledge for LLMs to alleviate these issues. The existing paradigm of KG-augmented LLMs manually predefines the breadth of the exploration space and requires flawless navigation in KGs. However, this paradigm cannot adaptively explore reasoning paths in KGs based on the question semantics, nor can it self-correct erroneous reasoning paths, resulting in a bottleneck in both efficiency and effectiveness. To address these limitations, we propose a novel self-correcting adaptive planning paradigm for KG-augmented LLMs named Plan-on-Graph (PoG), which first decomposes the question into several sub-objectives and then repeats the process of adaptively exploring reasoning paths, updating memory, and reflecting on the need to self-correct erroneous reasoning paths until arriving at the answer. Specifically, three key mechanisms, Guidance, Memory, and Reflection, are designed to work together to guarantee the adaptive breadth of self-correcting planning for graph reasoning. Finally, extensive experiments on three real-world datasets demonstrate the effectiveness and efficiency of PoG.
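The explore–memorize–reflect loop described above can be sketched in a few lines of Python. This is a minimal illustration under assumptions: the toy KG, the stubbed `decompose`, `explore`, and `reflect` functions, and the backtracking step are all hypothetical stand-ins for the LLM-driven Guidance, Memory, and Reflection mechanisms, not the paper's actual implementation.

```python
# Toy knowledge graph: entity -> list of (relation, target) edges (illustrative).
TOY_KG = {
    "Q": [("capital_of", "France")],
    "France": [("capital", "Paris")],
}

def decompose(question):
    # Guidance: split the question into sub-objectives (stubbed; in PoG an LLM does this).
    return ["identify the country", "find its capital"]

def explore(entity, memory):
    # Adaptive exploration: pick candidate edges from the current entity.
    # Here we simply return all outgoing edges of the toy KG.
    return TOY_KG.get(entity, [])

def reflect(memory):
    # Reflection: decide whether the current reasoning path answers the question
    # (stubbed as a simple check on the last visited entity).
    return bool(memory["path"]) and memory["path"][-1][1] == "Paris"

def plan_on_graph(question, start="Q", max_steps=5):
    # Memory: sub-objectives plus the reasoning path explored so far.
    memory = {"sub_objectives": decompose(question), "path": []}
    frontier = start
    for _ in range(max_steps):
        edges = explore(frontier, memory)
        if not edges:
            if not memory["path"]:
                break
            # Self-correction: backtrack one step when the path dead-ends.
            memory["path"].pop()
            frontier = memory["path"][-1][1] if memory["path"] else start
            continue
        relation, target = edges[0]
        memory["path"].append((relation, target))  # update memory
        if reflect(memory):
            return target
        frontier = target
    return None
```

Running `plan_on_graph("What is the capital of France?")` walks Q → France → Paris and returns `"Paris"`; in the actual paradigm, each stubbed step would instead be an LLM call conditioned on the question semantics and the memory.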