Large language models (LLMs) require continual knowledge updates to stay abreast of ever-changing world facts, motivating the formulation of the lifelong model editing task. While recent years have witnessed the development of various techniques for single and batch editing, these methods are either inapplicable or perform sub-optimally when faced with lifelong editing. In this paper, we introduce LEMoE, an advanced Mixture of Experts (MoE) adaptor for lifelong model editing. We first analyze the factors that limit the effectiveness of a conventional MoE adaptor in lifelong editing, namely catastrophic forgetting, inconsistent routing, and order sensitivity. Based on these insights, we propose a tailored module insertion method for lifelong editing, incorporating a novel KV anchor routing that enhances routing consistency between the training and inference stages, along with a concise yet effective clustering-based editing order planning. Experimental results demonstrate the effectiveness of our method in lifelong editing: it surpasses previous model editing techniques while maintaining outstanding performance on the batch editing task. Our code will be made available.
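To make the clustering-based editing order planning concrete, below is a minimal sketch of one plausible realization. It is not the paper's implementation: the function name `plan_edit_order`, the use of k-means from scikit-learn, the number of clusters, and the assumption that each edit is represented by a key vector (e.g., a hidden state of the edited prompt) are all illustrative choices.

```python
# Illustrative sketch of clustering-based editing order planning.
# Assumption: each edit is summarized by a fixed-size key vector;
# similar edits are grouped so they are applied in consecutive batches.
import numpy as np
from sklearn.cluster import KMeans

def plan_edit_order(edit_keys: np.ndarray, n_clusters: int = 8) -> list:
    """Return an ordering of edit indices grouped by cluster.

    edit_keys: (n_edits, hidden_dim) array, one key vector per edit.
    """
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = kmeans.fit_predict(edit_keys)
    # Apply edits cluster by cluster, so each expert sees coherent data
    # and interference between dissimilar edits is reduced.
    return [int(i) for c in range(n_clusters)
            for i in np.where(labels == c)[0]]

# Usage: 100 hypothetical edits with 64-dimensional key vectors.
keys = np.random.randn(100, 64).astype(np.float32)
order = plan_edit_order(keys, n_clusters=5)
print(order[:10])
```

The design intuition, under these assumptions, is that processing similar edits together mitigates the order sensitivity the paper identifies: a router trained on a coherent batch is less likely to scatter related edits across experts.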