Large language models (LLMs) equipped with retrieval, following the Retrieval-Augmented Generation (RAG) paradigm, should combine their parametric knowledge with external evidence, yet in practice they often hallucinate, over-trust noisy snippets, or ignore vital context. We introduce TCR (Transparent Conflict Resolution), a plug-and-play framework that makes this decision process observable and controllable. TCR (i) disentangles semantic match from factual consistency via dual contrastive encoders, (ii) estimates self-answerability to gauge confidence in internal memory, and (iii) feeds the three scalar signals to the generator through a lightweight soft prompt with SNR-based weighting. Across seven benchmarks, TCR improves conflict detection by 5-18 F1 points, raises knowledge-gap recovery by 21.4 pp, and cuts misleading-context overrides by 29.3 pp, while adding only 0.3% additional parameters. The signals align with human judgements and expose temporal decision patterns.
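Since the implementation is not shown in the abstract, the following is a minimal sketch, assuming a PyTorch-style setup, of how three scalar signals (semantic match, factual consistency, self-answerability) could be SNR-weighted and projected into soft-prompt embeddings that are prepended to the generator's input. All names (`SignalSoftPrompt`, `snr_weight`), the mean²/variance reading of "SNR", and the token/dimension sizes are hypothetical, not TCR's actual API.

```python
# A minimal sketch (not the authors' code) of injecting three scalar
# signals into a generator via an SNR-weighted soft prompt.
import torch
import torch.nn as nn


class SignalSoftPrompt(nn.Module):
    """Maps three scalar signals to k soft-prompt embeddings."""

    def __init__(self, hidden_dim: int, num_prompt_tokens: int = 4):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(3, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, num_prompt_tokens * hidden_dim),
        )
        self.num_prompt_tokens = num_prompt_tokens
        self.hidden_dim = hidden_dim

    @staticmethod
    def snr_weight(signals: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
        # One plausible reading of "SNR-based weighting": scale each
        # signal by its batch-estimated mean^2 / variance, so noisy
        # (high-variance) signals contribute less to the prompt.
        mean = signals.mean(dim=0, keepdim=True)
        var = signals.var(dim=0, keepdim=True, unbiased=False)
        snr = mean.pow(2) / (var + eps)
        return signals * snr / (snr.sum() + eps)

    def forward(self, sem_match, fact_consist, self_answer):
        # Each input: (batch,) scalars in [0, 1] from the upstream modules.
        signals = torch.stack([sem_match, fact_consist, self_answer], dim=-1)
        weighted = self.snr_weight(signals)
        prompts = self.proj(weighted)
        return prompts.view(-1, self.num_prompt_tokens, self.hidden_dim)


# Usage: prepend the returned embeddings to the frozen generator's input
# embeddings; only SignalSoftPrompt is trained, consistent with a small
# (~0.3%) parameter overhead.
prompt_layer = SignalSoftPrompt(hidden_dim=768)
batch = 8
soft = prompt_layer(torch.rand(batch), torch.rand(batch), torch.rand(batch))
print(soft.shape)  # torch.Size([8, 4, 768])
```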