Recent advancements in Large Language Models (LLMs) have significantly improved their performance across various Natural Language Processing (NLP) tasks. However, LLMs remain prone to generating non-factual responses due to the limitations of their parametric memory. Retrieval-Augmented Generation (RAG) systems address this issue by incorporating external knowledge through a retrieval module. Despite their successes, current RAG systems still face challenges from retrieval failures and the limited ability of LLMs to filter out irrelevant information. Therefore, in this work, we propose DSLR (Document Refinement with Sentence-Level Re-ranking and Reconstruction), an unsupervised framework that decomposes retrieved documents into sentences, filters out irrelevant sentences, and reconstructs the remainder into coherent passages. We experimentally validate DSLR on multiple open-domain QA datasets, and the results demonstrate that DSLR significantly enhances RAG performance over the conventional fixed-size passage approach. Furthermore, DSLR improves performance in specific yet realistic scenarios without requiring additional training, providing an effective and efficient solution for refining retrieved documents in RAG systems.
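The refinement pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the actual DSLR implementation: the sentence splitter and the relevance scorer here are hypothetical stand-ins (simple regex splitting and query-term overlap) for the real re-ranking model, and the threshold value is arbitrary.

```python
import re

def split_sentences(document: str) -> list[str]:
    # Step 1: decompose the retrieved document into sentences.
    # Naive splitter on end-of-sentence punctuation; a real system
    # would use a proper sentence tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]

def relevance_score(query: str, sentence: str) -> float:
    # Step 2: score each sentence's relevance to the query.
    # Placeholder scorer: fraction of query terms present in the sentence;
    # DSLR would use a trained re-ranker here instead.
    q_terms = set(re.findall(r"\w+", query.lower()))
    s_terms = set(re.findall(r"\w+", sentence.lower()))
    return len(q_terms & s_terms) / max(len(q_terms), 1)

def refine(query: str, document: str, threshold: float = 0.3) -> str:
    # Step 3: filter out low-scoring (irrelevant) sentences.
    # Step 4: reconstruct the survivors, in original order,
    # into a coherent passage.
    sentences = split_sentences(document)
    kept = [s for s in sentences if relevance_score(query, s) >= threshold]
    return " ".join(kept)
```

For example, refining the document "Shakespeare wrote Hamlet. The weather was nice. Hamlet is a tragedy." against the query "Who wrote Hamlet?" drops the weather sentence while keeping the two query-relevant ones in their original order.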