Multilingual knowledge editing (MKE) aims to simultaneously revise factual knowledge across multiple languages within large language models (LLMs). However, most existing MKE methods simply adapt monolingual editing methods to multilingual scenarios, overlooking the deep semantic connections that the same factual knowledge shares across languages, which limits edit performance. To address this issue, we first investigate how LLMs represent multilingual factual knowledge and find that the same fact expressed in different languages generally activates a shared set of neurons, which we call language-agnostic factual neurons. These neurons encode the semantic connections among multilingual knowledge and are concentrated in certain layers. Inspired by this finding, we propose a new MKE method that locates and modifies Language-Agnostic Factual Neurons (LAFN) to edit multilingual knowledge simultaneously. Specifically, we first generate a set of paraphrases for each piece of knowledge to be edited in order to precisely locate the corresponding language-agnostic factual neurons. We then optimize the update values applied to these neurons so that the same fact is modified consistently across languages. Experimental results on the Bi-ZsRE and MzsRE benchmarks demonstrate that our method outperforms existing MKE methods and achieves strong edit performance, underscoring the importance of modeling the semantic connections among multilingual knowledge.
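The locating step described above can be illustrated with a minimal, hypothetical sketch: neurons that rank among the strongest activations for every paraphrase of a fact (across languages) are taken as the shared, language-agnostic candidates. The function name, the activation-matrix shape, and the top-k intersection criterion are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def locate_shared_neurons(activations: np.ndarray, top_k: int = 5) -> list:
    """Hypothetical locating step: return indices of neurons that rank
    in the top-k activations for *every* paraphrase (each row of
    `activations`), i.e. the shared, language-agnostic candidates."""
    shared = None
    for row in activations:
        top = set(np.argsort(row)[-top_k:].tolist())
        shared = top if shared is None else shared & top
    return sorted(shared)

# Toy example: 3 paraphrases (e.g. the same fact in 3 languages),
# 8 hidden neurons. Neurons 2 and 7 fire strongly for all paraphrases.
acts = np.array([
    [0.1, 0.2, 0.90, 0.1, 0.3, 0.2, 0.1, 0.80],
    [0.2, 0.1, 0.80, 0.3, 0.1, 0.4, 0.2, 0.90],
    [0.3, 0.2, 0.95, 0.1, 0.2, 0.1, 0.3, 0.85],
])
print(locate_shared_neurons(acts, top_k=2))  # → [2, 7]
```

In a real setting the rows would be FFN activations gathered from the model for each paraphrase, and the located indices would then receive the optimized update values.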