Retrieval-Augmented Generation (RAG) addresses large language model (LLM) hallucinations by grounding responses in external knowledge, but its effectiveness is compromised by poor-quality retrieved contexts containing irrelevant or noisy information. While existing approaches attempt to improve performance through context selection based on predefined context quality assessment metrics, they show limited gains over standard RAG. We attribute this limitation to their failure in holistically utilizing available information (query, context list, and generator) for comprehensive quality assessment. Inspired by recent advances in data selection, we reconceptualize context quality assessment as an inference-time data valuation problem and introduce the Contextual Influence Value (CI value). This novel metric quantifies context quality by measuring the performance degradation when removing each context from the list, effectively integrating query-aware relevance, list-aware uniqueness, and generator-aware alignment. Moreover, CI value eliminates complex selection hyperparameter tuning by simply retaining contexts with positive CI values. To address practical challenges of label dependency and computational overhead, we develop a parameterized surrogate model for CI value prediction during inference. The model employs a hierarchical architecture that captures both local query-context relevance and global inter-context interactions, trained through oracle CI value supervision and end-to-end generator feedback. Extensive experiments across 8 NLP tasks and multiple LLMs demonstrate that our context selection method significantly outperforms state-of-the-art baselines, effectively filtering poor-quality contexts while preserving critical information. Code is available at https://github.com/SJTU-DMTai/RAG-CSM.
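To make the definition concrete, the leave-one-out formulation of the CI value can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: `score` stands in for whatever generator-side quality metric is used (e.g. EM/F1 against a gold answer), and the function names are hypothetical.

```python
# Illustrative leave-one-out sketch of the Contextual Influence (CI) value:
# the CI value of a context is the performance drop observed when that
# context is removed from the retrieved list. `score` is a hypothetical
# stand-in for the generator's answer-quality metric.

def ci_values(contexts, score):
    """CI value of each context: full-list performance minus the
    performance of the list with that context left out."""
    full = score(contexts)
    return [full - score(contexts[:i] + contexts[i + 1:])
            for i in range(len(contexts))]

def select_contexts(contexts, score):
    """Keep only contexts with a positive CI value — no selection
    threshold or top-k hyperparameter to tune."""
    return [c for c, v in zip(contexts, ci_values(contexts, score))
            if v > 0]
```

Note that the oracle version above needs a gold label inside `score` and one extra generator call per context; the paper's parameterized surrogate model exists precisely to predict these values at inference time without either cost.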