Although model editing has shown promise in revising knowledge in Large Language Models (LLMs), its impact on the inherent capabilities of LLMs is often overlooked. In this work, we reveal a critical phenomenon: even a single edit can trigger model collapse, manifesting as significant performance degradation across various benchmark tasks. However, benchmarking LLMs after each edit, while necessary to prevent such collapses, is impractically time-consuming and resource-intensive. To mitigate this, we propose using perplexity as a surrogate metric, validated by extensive experiments demonstrating its strong correlation with downstream task performance. We further conduct an in-depth study of sequential editing, a practical setting for real-world scenarios, across various editing methods and LLMs, focusing on hard cases identified in our earlier single-edit experiments. The results indicate that nearly all examined editing methods lead to model collapse after only a few edits. To facilitate further research, we have used GPT-3.5 to develop a new dataset, HardEdit, based on these hard cases. This dataset aims to establish a foundation for pioneering research on reliable model editing and the mechanisms underlying editing-induced model collapse. We hope this work draws the community's attention to the potential risks inherent in model editing practices.