Retrieval-augmented generation (RAG) pipelines are widely used in tasks such as question answering (QA), where relevant documents are retrieved from a vector store built with a pretrained embedding model. However, if the retrieved context is inaccurate, the answers generated by the large language model (LLM) may contain errors or hallucinations. Although pretrained embedding models have advanced considerably, adapting them to new domains remains challenging. Fine-tuning is a potential solution, but industrial settings often lack the necessary fine-tuning data. To address these challenges, we propose REFINE, a novel technique that generates synthetic data from available documents and then applies a model-fusion approach to fine-tune embeddings for improved retrieval performance in new domains, while preserving out-of-domain capability. We conducted experiments on two public datasets, SQUAD and RAG-12000, and on a proprietary TOURISM dataset. Results show that even standard fine-tuning combined with the proposed data-augmentation technique outperforms the vanilla pretrained model. When further combined with model fusion, the proposed approach achieves superior performance, improving recall by 5.76% on TOURISM and by 6.58% and 0.32% on SQUAD and RAG-12000, respectively.
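The retrieval step the abstract describes, embedding documents into a vector store and returning the nearest ones for a query, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `embed` function is a toy deterministic hash-based stand-in for a pretrained embedding model, and the document list is invented example data.

```python
import hashlib
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy stand-in for a pretrained embedding model: a hashed
    # bag-of-words vector, L2-normalized. A real RAG pipeline would
    # call an actual encoder here (this function is an assumption
    # made purely for illustration).
    vec = np.zeros(dim)
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec


def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank documents by cosine similarity (dot product of normalized
    # vectors) against the query embedding and return the top k.
    doc_matrix = np.stack([embed(d) for d in documents])
    scores = doc_matrix @ embed(query)
    top = np.argsort(-scores)[:k]
    return [documents[i] for i in top]


# Invented example documents, loosely in the spirit of a tourism domain.
docs = [
    "The museum opens at nine in the morning.",
    "Visa applications require two photographs.",
    "Local buses run every fifteen minutes.",
]
```

Retrieval quality in such a pipeline hinges entirely on the embedding model, which is why domain adaptation of the encoder (as REFINE proposes) directly affects downstream answer accuracy.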