Retrieval-Augmented Generation (RAG) enriches the ability of language models to reason using external context to augment responses for a given user prompt. The approach has risen in popularity due to its practical utility in search, question answering, and chatbots. However, how it works is not clearly understood. In this paper, we mechanistically examine the RAG pipeline and show that language models take a shortcut: they exhibit a strong bias towards utilizing only the context information to answer the question, while relying minimally on their parametric memory. We probe this mechanistic behavior with (i) Causal Mediation Analysis, which shows that parametric memory is minimally utilized when answering a question, and (ii) Attention Contributions and Knockouts, which show that the last token's residual stream is enriched not from the subject token in the question but from other informative tokens in the context. We find this pronounced shortcut behavior to hold across both the LLaMA and Phi families of models.
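To make the attention-knockout idea concrete, here is a minimal, self-contained sketch on a toy single-head attention step: an edge from a chosen position to the last token is severed by setting its attention logit to negative infinity, and per-position contributions to the last token's residual stream are compared before and after. The function names and toy vectors are illustrative assumptions, not the paper's implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_to_last(scores, values, knockout=None):
    """Toy single-head attention output for the last token.

    scores:   raw attention logits from the last token to each position
    values:   per-position value vectors (lists of floats)
    knockout: positions whose edge to the last token is severed
              (logit set to -inf), as in an attention knockout

    Returns the attended output and each position's contribution
    norm, i.e. how much that token enriches the residual stream.
    """
    if knockout:
        scores = [(-math.inf if i in knockout else s)
                  for i, s in enumerate(scores)]
    w = softmax(scores)
    dim = len(values[0])
    out = [sum(w[i] * values[i][d] for i in range(len(values)))
           for d in range(dim)]
    contribs = [w[i] * math.sqrt(sum(v * v for v in values[i]))
                for i in range(len(values))]
    return out, contribs
```

In this framing, knocking out the subject-token position and observing little change in the last token's output (while knocking out context tokens changes it substantially) is the signature of the shortcut behavior described above.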