Knowledge graphs (KGs), with their structured representation capabilities, offer a promising avenue for enhancing Retrieval Augmented Generation (RAG) systems, leading to the development of KG-RAG systems. Nevertheless, existing methods often struggle to achieve effective synergy between system effectiveness and cost efficiency, resulting in either unsatisfying performance or excessive LLM prompt tokens and inference time. To this end, this paper proposes REMINDRAG, which employs LLM-guided graph traversal featuring node exploration, node exploitation, and, most notably, memory replay, to improve both system effectiveness and cost efficiency. Specifically, REMINDRAG memorizes traversal experience within KG edge embeddings, mirroring the way LLMs "memorize" world knowledge within their parameters, but in a training-free manner. We theoretically and experimentally confirm the effectiveness of REMINDRAG, demonstrating its superiority over existing baselines across various benchmark datasets and LLM backbones. Our code is available at https://github.com/kilgrims/ReMindRAG.