Large language models (LLMs) have achieved remarkable performance on knowledge graph question answering (KGQA) tasks by planning over and interacting with knowledge graphs. However, existing methods often conflate tool utilization with knowledge reasoning, which harms the readability of model outputs and gives rise to hallucinated tool invocations, hindering the advancement of KGQA. To address this issue, we propose Memory-augmented Query Reconstruction for LLM-based Knowledge Graph Reasoning (MemQ), which decouples the LLM from tool invocation using an LLM-built query memory. By establishing a memory module with explicit descriptions of query statements, MemQ facilitates the KGQA process through natural language reasoning and memory-augmented query reconstruction. In addition, we design an effective and readable reasoning strategy to enhance the LLM's reasoning capability on KGQA. Experimental results show that MemQ achieves state-of-the-art performance on the widely used WebQSP and CWQ benchmarks.
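To make the idea of memory-augmented query reconstruction concrete, the sketch below pairs natural-language descriptions with query statements and reconstructs a query by retrieving the best-matching statement for each reasoning step. The memory entries, the token-overlap retriever, and the SPARQL fragments are all illustrative assumptions, not MemQ's actual memory construction or retrieval mechanism.

```python
# Minimal sketch of memory-augmented query reconstruction, assuming a
# memory of (description, query statement) pairs built offline from gold
# queries. All entries and the retriever below are hypothetical.
from dataclasses import dataclass


@dataclass
class MemoryEntry:
    description: str  # explicit natural-language description of the statement
    statement: str    # the query statement it describes (a SPARQL triple here)


# Hypothetical query memory; in MemQ this would be built by the LLM.
MEMORY = [
    MemoryEntry("find the country where a person was born",
                "?person ns:people.person.place_of_birth ?place ."),
    MemoryEntry("find the spouse of a person",
                "?person ns:people.person.spouse_s ?spouse ."),
]


def similarity(a: str, b: str) -> float:
    """Token-overlap (Jaccard) score; a stand-in for a learned retriever."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def reconstruct_query(reasoning_steps: list[str]) -> str:
    """Map each natural-language reasoning step to its closest memory entry
    and assemble the retrieved statements into a single query body."""
    body = [max(MEMORY, key=lambda e: similarity(step, e.description)).statement
            for step in reasoning_steps]
    return "SELECT DISTINCT ?place WHERE {\n  " + "\n  ".join(body) + "\n}"


if __name__ == "__main__":
    steps = ["find the country where the author was born"]
    print(reconstruct_query(steps))
```

Under this framing, the LLM only produces readable natural-language reasoning steps, while query construction is handled by retrieval from memory, which is what removes the opportunity for hallucinated tool invocations.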