Model editing techniques are essential for efficiently updating knowledge in large language models (LLMs). However, the effectiveness of existing approaches degrades in massive editing scenarios, particularly when evaluated with practical metrics. Their robustness is also limited in context-rich settings or when editing multiple facts about the same subject simultaneously. We attribute these failures to embedding misalignment among knowledge items, which undermines editing reliability at scale. To address this, we propose EAMET (Embedding Alignment Model Editing in Transformers), which aligns the spaces of key and residual embeddings. Extensive experiments across six LLMs and three datasets demonstrate that EAMET consistently outperforms existing methods, achieving about 90\% editing efficacy when editing 10k facts. Code and datasets are publicly available at https://ybdai7.github.io/eamet-page/.