Large language models (LLMs) store extensive factual knowledge, but the mechanisms by which they store and express this knowledge remain unclear. The Knowledge Neuron (KN) thesis is a prominent theory for explaining these mechanisms. It rests on the Knowledge Localization (KL) assumption, which posits that a fact can be localized to a few knowledge storage units, namely knowledge neurons. However, this assumption has two limitations: first, it may be too rigid about how knowledge is stored, and second, it neglects the role of the attention module in knowledge expression. In this paper, we first re-examine the KL assumption and demonstrate that these limitations do indeed exist. To address them, we then present two new findings, each targeting one of the limitations: one concerning knowledge storage and the other concerning knowledge expression. We summarize these findings as the \textbf{Query Localization} (QL) assumption and argue that the KL assumption can be viewed as a simplification of the QL assumption. Based on the QL assumption, we further propose the Consistency-Aware KN modification method, which improves the performance of knowledge modification and further validates our new assumption. We conduct 39 sets of experiments, along with additional visualization experiments, to rigorously confirm our conclusions. Code is available at https://github.com/heng840/KnowledgeLocalization.