While large language models (LLMs) have shown remarkable capabilities in natural language processing, they struggle with complex, multi-step reasoning tasks over knowledge graphs (KGs). Existing approaches that integrate LLMs and KGs either underutilize the reasoning abilities of LLMs or suffer from prohibitive computational costs due to tight coupling. To address these limitations, we propose EffiQA, a novel collaborative framework that strikes a balance between performance and efficiency via an iterative paradigm. EffiQA consists of three stages: global planning, efficient KG exploration, and self-reflection. Specifically, EffiQA leverages the commonsense capabilities of LLMs to explore potential reasoning pathways through global planning. It then offloads semantic pruning to a small plug-in model for efficient KG exploration. Finally, the exploration results are fed back to the LLM for self-reflection, which further refines both the global planning and the KG exploration. Empirical results on multiple KBQA benchmarks demonstrate EffiQA's effectiveness, achieving an optimal balance between reasoning accuracy and computational cost. We hope the proposed framework will pave the way for efficient, knowledge-intensive querying by redefining the integration of LLMs and KGs, fostering future research on knowledge-based question answering.