Large Language Models (LLMs) possess impressive reasoning abilities but are prone to generating incorrect information, a phenomenon often referred to as hallucination. While incorporating external Knowledge Graphs (KGs) can partially mitigate this issue, existing methods primarily treat KGs as static knowledge repositories, overlooking the critical disparity between KG and LLM knowledge and failing to fully exploit the reasoning capabilities inherent in KGs. To address these limitations, we propose Pyramid-Driven Alignment (PDA), a novel framework for seamlessly integrating LLMs with KGs. PDA applies Pyramid Principle analysis to construct a hierarchical pyramid structure that reflects the input question and generates validated deductive knowledge, thereby strengthening the alignment between LLMs and KGs and ensuring more cohesive integration. Furthermore, PDA employs a recursive mechanism to harness the underlying reasoning abilities of KGs, yielding more accurate knowledge retrieval for question-answering tasks. Our experimental results reveal a substantial performance advantage of PDA over state-of-the-art baselines, with improvements reaching 26.70% and 26.78%, respectively.