Large Language Models (LLMs) possess vast amounts of knowledge within their parameters, prompting research into methods for locating and editing this knowledge. Previous work has largely focused on locating entity-related (often single-token) facts in smaller models. However, several key questions remain unanswered: (1) How can we effectively locate query-relevant neurons in contemporary autoregressive LLMs, such as Llama and Mistral? (2) How can we address the challenge of long-form text generation? (3) Are there localized knowledge regions in LLMs? In this study, we introduce Query-Relevant Neuron Cluster Attribution (QRNCA), a novel architecture-agnostic framework for identifying query-relevant neurons in LLMs. QRNCA enables the examination of long-form answers beyond triplet facts by employing multi-choice question answering as a proxy task. To evaluate the effectiveness of the detected neurons, we build two multi-choice QA datasets spanning diverse domains and languages. Empirical evaluations demonstrate that our method significantly outperforms baseline methods. Furthermore, analysis of neuron distributions reveals visible localized regions, particularly across different domains. Finally, we show potential applications of the detected neurons in knowledge editing and neuron-based prediction.