Editing knowledge in large language models is an attractive capability: it allows us to correct facts learned incorrectly during pre-training, as well as update the model with an ever-growing list of new facts. While existing model editing techniques have shown promise, they are usually evaluated using metrics for reliability, specificity, and generalization over one or a few edits. We argue that for model editing to have practical utility, we must be able to make multiple edits to the same model. With this in mind, we evaluate current model editing methods at scale, focusing on two state-of-the-art methods: ROME and MEMIT. We find that as the model is edited sequentially with multiple facts, it continually forgets previously edited facts and loses the ability to perform downstream tasks. This forgetting happens in two phases: an initial gradual but progressive forgetting phase, followed by an abrupt, catastrophic forgetting phase. Both gradual and catastrophic forgetting limit the usefulness of model editing methods at scale: the former makes model editing less effective as multiple edits are made to the model, while the latter caps the scalability of such methods. Our analysis also highlights other key limitations of ROME and MEMIT at scale. With our work, we push for the development and evaluation of model editing methods that keep scalability in mind.