Large Language Models (LLMs) have shown proficiency in question-answering tasks but often struggle to integrate real-time knowledge, leading to potentially outdated or inaccurate responses. This problem becomes even more challenging for multi-hop questions, which require LLMs to update and integrate multiple pieces of knowledge relevant to the question. To tackle this problem, we propose the Retrieval-Augmented model Editing (RAE) framework for multi-hop question answering. RAE first retrieves edited facts and then refines the language model through in-context learning. Specifically, our retrieval approach, based on mutual information maximization, leverages the reasoning abilities of LLMs to identify chains of facts that traditional similarity-based searches might miss. In addition, our framework includes a pruning strategy that eliminates redundant information from the retrieved facts, which improves editing accuracy and mitigates hallucination. We also provide theoretical justification for the efficacy of our fact retrieval. Finally, comprehensive evaluation across various LLMs validates RAE's ability to provide accurate answers with updated knowledge. Our code is available at: https://github.com/sycny/RAE.