Retrieval-Augmented Generation (RAG) enhances Large Language Models (LLMs), especially on knowledge-intensive tasks. Despite its advantages, current RAG methods often struggle to fully exploit knowledge during generation. In particular, the synergy between the model's internal parametric knowledge and external retrieved knowledge remains limited: retrieved content can mislead generation, while some generated content can in turn guide the model toward more accurate outputs. In this work, we propose Collaborative Chain-of-Agents, a framework designed to explicitly strengthen the synergy between parametric and retrieved knowledge. Specifically, we first introduce CoCoA-zero, a multi-agent RAG framework that performs conditional knowledge induction and then reasons over the induced knowledge to derive answers. Building on this, we develop CoCoA, a long-chain training strategy that synthesizes extended multi-agent reasoning trajectories from CoCoA-zero to fine-tune the LLM. This strategy strengthens the model's capability to explicitly integrate and jointly leverage parametric and retrieved knowledge. Experimental results demonstrate the superiority of CoCoA on open-domain QA and multi-hop QA.
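To make the two-stage design concrete, the following is a minimal sketch of a CoCoA-zero-style pipeline: one agent call performs conditional knowledge induction (eliciting parametric knowledge conditioned on retrieved evidence), and a second reasons over both sources to derive the answer. All names here (`CoCoAZero`, `induce`, `reason`, `retrieve`) and the prompt wording are illustrative assumptions, not the paper's actual interfaces or prompts.

```python
# Hypothetical sketch of a CoCoA-zero-style two-stage multi-agent RAG pipeline.
# Not the authors' implementation; prompts and names are assumptions.
from dataclasses import dataclass
from typing import Callable, List

# Stand-in for any chat-completion call: prompt in, text out.
LLM = Callable[[str], str]


@dataclass
class CoCoAZero:
    llm: LLM
    retrieve: Callable[[str], List[str]]  # query -> retrieved passages

    def induce(self, question: str, passages: List[str]) -> str:
        """Stage 1: conditional knowledge induction. Elicit the model's
        parametric knowledge about the question, conditioned on (and checked
        against) the retrieved evidence."""
        context = "\n".join(passages)
        prompt = (
            f"Question: {question}\n\nRetrieved evidence:\n{context}\n\n"
            "State the relevant facts you know about this question, noting "
            "where the evidence supports or contradicts them."
        )
        return self.llm(prompt)

    def reason(self, question: str, passages: List[str], induced: str) -> str:
        """Stage 2: reason jointly over retrieved and induced knowledge
        to derive the final answer."""
        context = "\n".join(passages)
        prompt = (
            f"Question: {question}\n\nRetrieved evidence:\n{context}\n\n"
            f"Induced knowledge:\n{induced}\n\n"
            "Reason step by step over both sources, then give the final answer."
        )
        return self.llm(prompt)

    def answer(self, question: str) -> str:
        passages = self.retrieve(question)
        induced = self.induce(question, passages)
        return self.reason(question, passages, induced)
```

In the full CoCoA method, trajectories produced by such a pipeline (induction plus reasoning steps) would be concatenated into long multi-agent traces and used as fine-tuning data, so a single model learns to carry out the whole chain itself.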