Despite the impressive performance of large language models (LLMs) pretrained on vast knowledge corpora, advancing their knowledge manipulation, i.e., the ability to effectively recall, reason over, and transfer relevant knowledge, remains challenging. Existing methods mainly rely on Supervised Fine-Tuning (SFT) on labeled datasets to enhance LLMs' knowledge manipulation ability. However, we observe that SFT models still exhibit a "known-but-incorrect" phenomenon: they explicitly possess the knowledge relevant to a given question yet fail to leverage it to produce the correct answer. To address this challenge, we propose KALE (Knowledge-Aware LEarning), a post-training framework that leverages knowledge graphs (KGs) to generate high-quality rationales and enhance LLMs' knowledge manipulation ability. Specifically, KALE first introduces a Knowledge-Induced (KI) data synthesis method that efficiently extracts multi-hop reasoning paths from KGs to generate high-quality rationales for question-answer pairs. KALE then employs a Knowledge-Aware (KA) fine-tuning paradigm that internalizes rationale-guided reasoning by minimizing the KL divergence between the model's predictions with and without rationales. Extensive experiments with six LLMs on eight popular benchmarks demonstrate the effectiveness of KALE, with accuracy improvements of up to 11.72% and 4.18% on average.
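To make the KI step concrete, the sketch below enumerates multi-hop relation paths between two entities in a toy KG stored as (head, relation, tail) triples. It is a minimal illustration only: the entity names, function names, and the depth-first search are assumptions, not the paper's actual extraction pipeline.

```python
# Illustrative multi-hop path extraction over a KG given as
# (head, relation, tail) triples. All names here are hypothetical.
from collections import defaultdict

def build_adjacency(triples):
    """Index triples so the graph can be walked from any head entity."""
    adj = defaultdict(list)
    for head, rel, tail in triples:
        adj[head].append((rel, tail))
    return adj

def extract_paths(adj, source, target, max_hops=3):
    """Collect all relation paths from source to target of length
    <= max_hops, never revisiting an entity within one path."""
    paths = []

    def dfs(entity, path, visited):
        if entity == target and path:
            paths.append(list(path))
            return
        if len(path) >= max_hops:
            return
        for rel, nxt in adj.get(entity, []):
            if nxt in visited:
                continue
            visited.add(nxt)
            path.append((entity, rel, nxt))
            dfs(nxt, path, visited)
            path.pop()
            visited.remove(nxt)

    dfs(source, [], {source})
    return paths

# Toy example: both a 1-hop and a 2-hop path connect Paris to Europe.
triples = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Paris", "located_in", "Europe"),
]
adj = build_adjacency(triples)
for p in extract_paths(adj, "Paris", "Europe"):
    print(" -> ".join(f"{h} --{r}--> {t}" for h, r, t in p))
```

Each extracted path can then be verbalized into a rationale for the corresponding question-answer pair, which is the role the KI method plays in KALE.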
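The KA objective can be pictured as a distillation-style consistency loss over the answer tokens. The PyTorch fragment below is a minimal sketch under assumptions the abstract does not fix: a HuggingFace-style causal LM whose forward pass returns `.logits`, the rationale-conditioned pass treated as a frozen teacher, and the KL direction taken from the with-rationale distribution to the without-rationale one.

```python
# Hedged sketch of the KA fine-tuning signal: align the model's
# answer-token distribution without the rationale to the one it
# produces when the rationale is in context. The KL direction,
# teacher freezing, and slicing are assumptions, not the paper's
# exact formulation.
import torch
import torch.nn.functional as F

def ka_kl_loss(model, q_ids, qr_ids, answer_len):
    """q_ids: question-only input; qr_ids: question + rationale input.
    Both sequences end with the same answer_len answer tokens."""
    # Logits at the positions that predict each answer token.
    logits_no_r = model(q_ids).logits[:, -answer_len - 1:-1, :]
    with torch.no_grad():  # rationale-conditioned run acts as the teacher
        logits_with_r = model(qr_ids).logits[:, -answer_len - 1:-1, :]
    log_p_no_r = F.log_softmax(logits_no_r, dim=-1)
    p_with_r = F.softmax(logits_with_r, dim=-1)
    # KL(p_with_r || p_no_r), averaged over the batch
    return F.kl_div(log_p_no_r, p_with_r, reduction="batchmean")
```

In practice such a consistency term would be combined with the standard SFT cross-entropy loss, so that the model both learns the labeled answers and internalizes the rationale-guided reasoning the abstract describes.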