Retrieval-Augmented Generation (RAG) has been shown to improve the factual accuracy of Large Language Models (LLMs), but existing methods often suffer from a limited ability to reason effectively over retrieved evidence, particularly when using open-source LLMs. To bridge this gap, we introduce a novel framework, Open-RAG, designed to enhance reasoning capabilities in RAG with open-source LLMs. Our framework transforms an arbitrary dense LLM into a parameter-efficient sparse mixture-of-experts (MoE) model capable of handling complex reasoning tasks, including both single- and multi-hop queries. Open-RAG uniquely trains the model to navigate challenging distractors that appear relevant but are misleading. As a result, Open-RAG leverages latent learning, dynamically selecting relevant experts and integrating external knowledge effectively to produce more accurate and contextually relevant responses. In addition, we propose a hybrid adaptive retrieval method to determine retrieval necessity and balance the trade-off between performance gain and inference speed. Experimental results show that Llama2-7B-based Open-RAG outperforms state-of-the-art LLMs and RAG models such as ChatGPT, Self-RAG, and Command R+ on various knowledge-intensive tasks. We open-source our code and models at https://openragmoe.github.io/
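The core idea behind adaptive retrieval can be illustrated with a minimal sketch. This is not the paper's exact method; it only assumes the model can emit a confidence score for answering from its parametric knowledge alone, and retrieval is triggered when that confidence falls below a tunable threshold that governs the accuracy/latency trade-off. The function name and threshold value are illustrative, not from the paper.

```python
# Minimal sketch of confidence-thresholded adaptive retrieval (hypothetical;
# the actual Open-RAG method may differ). Retrieval is invoked only when the
# model's no-retrieval confidence is low, trading accuracy for inference speed.

def should_retrieve(no_retrieval_confidence: float, threshold: float = 0.5) -> bool:
    """Decide whether to call the retriever for a query.

    no_retrieval_confidence: model-estimated probability (assumed available)
        that it can answer correctly without external evidence.
    threshold: higher values trigger retrieval more often (more accurate but
        slower); lower values skip retrieval more often (faster).
    """
    return no_retrieval_confidence < threshold


# A low-confidence query triggers retrieval; a high-confidence one does not.
print(should_retrieve(0.3))  # True
print(should_retrieve(0.9))  # False
```

Sweeping the threshold on a validation set gives an explicit knob for balancing performance gain against inference speed, which is the trade-off the abstract refers to.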