We present UncertaintyRAG, a novel approach to long-context Retrieval-Augmented Generation (RAG) that uses Signal-to-Noise Ratio (SNR)-based span uncertainty to estimate similarity between text chunks. This span uncertainty improves model calibration, increasing robustness and mitigating the semantic inconsistencies introduced by random chunking. Building on this insight, we propose an efficient unsupervised learning technique for training the retrieval model, together with an effective data sampling and scaling strategy. UncertaintyRAG outperforms baselines by 2.03% on LLaMA-2-7B, achieving state-of-the-art results under distribution-shift settings while using only 4% of the training data required by other advanced open-source retrieval models. Our method demonstrates strong calibration through span uncertainty, yielding improved generalization and robustness in long-context RAG tasks. In addition, UncertaintyRAG provides a lightweight retrieval model that can be integrated into any large language model, regardless of context window length, without fine-tuning, showcasing the flexibility of our approach.