The general capabilities of large language models (LLMs) make them the infrastructure for a wide range of AI applications, but updating their internal knowledge requires significant resources. Model editing has recently emerged as a promising technique for efficiently updating a small amount of knowledge in LLMs and has attracted much attention. In particular, local editing methods, which directly update model parameters, are well suited to such small-scale updates. These methods update weights by computing closed-form least-squares solutions and identify edited knowledge at inference time through vector-level matching, achieving promising results. However, they still require considerable time and resources to complete the computation. Moreover, vector-level matching lacks reliability, and such updates disrupt the original organization of the model's parameters. To address these issues, we propose a detachable and expandable Subject Word Embedding Altering (SWEA) framework, which locates editing embeddings through token-level matching and adds them to the subject word embeddings of the Transformer input. To obtain these editing embeddings, we propose an optimizing-then-suppressing fusion method, which first optimizes learnable embedding vectors for the editing target and then suppresses the Knowledge Embedding Dimensions (KEDs) to obtain the final editing embeddings. Combining the two, we propose the SWEA$\oplus$OS method for editing factual knowledge in LLMs. SWEA$\oplus$OS achieves overall state-of-the-art (SOTA) performance on the \textsc{CounterFact} and zsRE datasets. To further validate its reasoning ability when editing knowledge, we evaluate SWEA$\oplus$OS on the more complex \textsc{RippleEdits} benchmark; the results show that it also attains SOTA reasoning ability.
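To make the SWEA mechanism concrete, the following is a minimal sketch (not the authors' implementation) of the core idea: editing embeddings are cached per subject token sequence, and at inference time an exact token-level match on the input detects the subject span, after which the cached editing embedding is added to that span's word embeddings. The table sizes, dimensions, and token ids are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)

# Toy vocabulary embedding table standing in for the Transformer's input embeddings.
embed_table = rng.normal(size=(20, d))

# Editing table: subject token-id tuple -> per-token editing embedding.
# The (3, 7) key stands in for a hypothetical two-token subject.
edit_table = {
    (3, 7): rng.normal(size=(2, d)),
}

def embed_with_edits(token_ids):
    """Look up input embeddings, then add the cached editing embedding
    wherever a stored subject token sequence matches exactly
    (token-level matching, as opposed to vector-level matching)."""
    x = embed_table[token_ids].copy()
    for subj, delta in edit_table.items():
        n = len(subj)
        for i in range(len(token_ids) - n + 1):
            if tuple(token_ids[i:i + n]) == subj:
                x[i:i + n] += delta
    # The framework is detachable: deleting an edit_table entry restores
    # the original embeddings, since model weights are never touched.
    return x

ids = [1, 3, 7, 5]
edited = embed_with_edits(ids)
plain = embed_table[ids]
# Only positions 1 and 2 (the matched subject span) differ from the plain lookup.
```

Because the edit lives entirely in a lookup table applied at the input layer, edits can be added, expanded, or removed without recomputing any closed-form weight update.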