Retrieval-augmented generation (RAG) enhances the question-answering (QA) abilities of large language models (LLMs) by integrating external knowledge. However, adapting general-purpose RAG systems to specialized fields such as science and medicine poses unique challenges due to distribution shifts and limited access to domain-specific data. To tackle this, we propose SimRAG, a self-training approach that equips the LLM with joint capabilities of question answering and question generation for domain adaptation. Our method first fine-tunes the LLM on instruction-following, question-answering, and search-related data. Then, it prompts the same LLM to generate diverse domain-relevant questions from unlabeled corpora, with an additional filtering strategy to retain high-quality synthetic examples. By leveraging these synthetic examples, the LLM improves its performance on domain-specific RAG tasks. Experiments on 11 datasets, spanning two backbone sizes and three domains, demonstrate that SimRAG outperforms baselines by 1.2\%--8.6\%.