In language models (LMs), intra-memory knowledge conflict arises when inconsistent information about the same event is encoded within the model's parametric knowledge. While prior work has primarily focused on resolving conflicts between a model's internal knowledge and external resources through approaches such as fine-tuning or knowledge editing, the problem of localizing, within the model's internal representations, conflicts that originate during pre-training remains unexplored. In this work, we design a framework based on mechanistic interpretability methods to identify where and how conflicting knowledge from the pre-training data is encoded within LMs. Our findings contribute to a growing body of evidence that specific internal components of a language model are responsible for encoding conflicting knowledge from pre-training, and we demonstrate how mechanistic interpretability methods can be leveraged to causally intervene on and control conflicting knowledge at inference time.
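To make the notion of causal intervention on internal components concrete, the following is a minimal sketch of activation patching, a standard mechanistic interpretability technique, using a public GPT-2 model with plain PyTorch forward hooks. The model, the layer choice, and the conflicting-fact prompts are illustrative assumptions, not the paper's actual framework or data; in practice one would sweep over layers and components to localize where the conflicting knowledge is encoded.

```python
# Illustrative sketch (assumed setup, not the paper's implementation):
# patch a single MLP output from one run into another and observe whether
# the model's predicted answer shifts between conflicting completions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# Hypothetical prompts that may elicit conflicting parametric answers.
prompt_a = "The capital of Australia is"
prompt_b = "Australia's capital city is called"

layer = 6  # assumed layer to probe; a real study sweeps all layers
cached = {}

def cache_hook(module, inputs, output):
    # Store the MLP output from the source run.
    cached["mlp"] = output.detach()

def patch_hook(module, inputs, output):
    # Replace the MLP output at the final token position with the cached one.
    patched = output.clone()
    patched[:, -1, :] = cached["mlp"][:, -1, :]
    return patched

ids_a = tok(prompt_a, return_tensors="pt").input_ids
ids_b = tok(prompt_b, return_tensors="pt").input_ids

# 1) Cache the chosen MLP activation from run A.
handle = model.transformer.h[layer].mlp.register_forward_hook(cache_hook)
with torch.no_grad():
    model(ids_a)
handle.remove()

# 2) Patch it into run B and inspect the next-token logits.
handle = model.transformer.h[layer].mlp.register_forward_hook(patch_hook)
with torch.no_grad():
    logits = model(ids_b).logits
handle.remove()

# Compare logits of the first subword token of each candidate answer.
for candidate in (" Canberra", " Sydney"):
    tid = tok(candidate).input_ids[0]
    print(candidate, logits[0, -1, tid].item())
```

A component whose patched activation systematically flips the preferred completion is a candidate locus of the conflicting knowledge; repeating the sweep across layers, attention heads, and token positions yields the kind of localization the framework targets.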