In the traditional RAG framework, the basic retrieval units are normally short. Common retrievers like DPR typically work with 100-word Wikipedia paragraphs. This design forces the retriever to search over a large corpus to find the `needle' unit, while the reader only needs to generate an answer from the short retrieved units. Such an imbalanced design, with a `heavy' retriever and a `light' reader, can lead to sub-optimal performance. The loss of contextual information in the short, chunked units may increase the likelihood of introducing hard negatives during the retrieval stage, and the reader may not fully leverage the capabilities of recent advances in LLMs. To alleviate this imbalance, we propose a new framework, LongRAG, consisting of a `long retriever' and a `long reader'. On the two Wikipedia-based datasets, NQ and HotpotQA, LongRAG processes the entire Wikipedia corpus into 4K-token units by grouping related documents. By increasing the unit size, we significantly reduce the total number of units, which greatly eases the burden on the retriever and yields strong retrieval performance with only a few (fewer than 8) top units. Without requiring any training, LongRAG achieves an EM of 62.7% on NQ and 64.3% on HotpotQA, on par with the (fully-trained) SoTA models. Furthermore, we test LongRAG on two non-Wikipedia-based datasets, Qasper and MultiFieldQA-en, where it processes each individual document as a single (long) unit rather than chunking it into smaller pieces. In this setting, LongRAG achieves an F1 score of 25.9% on Qasper and 57.5% on MultiFieldQA-en. Our study offers insights into the future roadmap for combining RAG with long-context LLMs.
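The corpus-grouping step described above can be sketched as a greedy packing procedure. This is a minimal illustration, not the authors' implementation: the helper names are hypothetical, token counts are approximated by whitespace splitting rather than a real tokenizer, and we assume the input documents are already ordered so that related documents (e.g. linked by hyperlink structure) are adjacent.

```python
# Sketch of LongRAG-style corpus grouping (hypothetical helpers; token
# counts are approximated by whitespace splitting, not a real tokenizer).

MAX_UNIT_TOKENS = 4096  # target size of each long retrieval unit


def approx_tokens(text: str) -> int:
    """Rough token count; a real system would use the model's tokenizer."""
    return len(text.split())


def group_documents(docs: list[dict]) -> list[list[dict]]:
    """Greedily pack documents into units of at most MAX_UNIT_TOKENS each.

    Assumes `docs` is ordered so that related documents are adjacent,
    so each packed unit stays topically coherent.
    """
    units, current, current_len = [], [], 0
    for doc in docs:
        n = approx_tokens(doc["text"])
        if current and current_len + n > MAX_UNIT_TOKENS:
            units.append(current)          # close the full unit
            current, current_len = [], 0
        current.append(doc)
        current_len += n
    if current:
        units.append(current)
    return units


# Toy corpus with synthetic document lengths (~3000, ~2000, ~1500 tokens).
corpus = [
    {"title": "Eiffel Tower", "text": "The Eiffel Tower is " + "word " * 3000},
    {"title": "Gustave Eiffel", "text": "Gustave Eiffel was " + "word " * 2000},
    {"title": "Paris", "text": "Paris is " + "word " * 1500},
]
units = group_documents(corpus)
print([len(u) for u in units])  # → [1, 2]: docs 2 and 3 fit in one unit
```

With larger units, the retriever searches over far fewer candidates, and the long-context reader receives a handful of coherent multi-document units instead of many isolated paragraphs.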