Recent advancements in Large Language Models (LLMs) have expanded their context windows to unprecedented lengths, sparking debate about the necessity of Retrieval-Augmented Generation (RAG). To address the fragmented evaluation paradigms and limited test cases of existing Needle-in-a-Haystack (NIAH) benchmarks, this paper introduces U-NIAH, a unified framework that systematically compares LLMs and RAG methods in controlled long-context settings. Our framework extends beyond traditional NIAH by incorporating multi-needle, long-needle, and needle-in-needle configurations, along with different retrieval settings, while leveraging the synthetic Starlight Academy dataset, a fictional magical universe, to eliminate biases from pre-trained knowledge. Through extensive experiments, we investigate three research questions: (1) the performance trade-offs between LLMs and RAG, (2) error patterns in RAG, and (3) RAG's limitations in complex settings. Our findings show that RAG significantly enhances smaller LLMs by mitigating the "lost-in-the-middle" effect and improving robustness, achieving an 82.58% win rate over LLMs alone. However, we observe that retrieval noise and reversed chunk ordering degrade performance, while, surprisingly, advanced reasoning LLMs exhibit reduced RAG compatibility due to their sensitivity to semantic distractors. We identify typical error patterns, including omission due to noise, hallucination under high-noise conditions, and self-doubt behaviors. Our work not only highlights the complementary roles of RAG and LLMs but also provides actionable insights for optimizing real-world deployments. Code: https://github.com/Tongji-KGLLM/U-NIAH.