The rise of Large Language Models (LLMs) has highlighted the necessity of effective unlearning mechanisms to comply with data regulations and ethical AI practices. LLM unlearning aims to remove undesired data influences and the associated model capabilities without compromising model utility outside the unlearning scope. While interest in studying LLM unlearning is growing, the impact of optimizer choice on LLM unlearning remains under-explored. In this work, we shed light on the significance of optimizer selection in LLM unlearning for the first time, establishing a clear connection between second-order optimization and influence unlearning (a classical approach that uses influence functions to update the model and remove data influence). This insight propels us to develop a second-order unlearning framework, termed SOUL, built upon the second-order clipped stochastic optimization (Sophia)-based LLM training method. SOUL extends the static, one-shot model update of influence unlearning into a dynamic, iterative unlearning process. Our extensive experiments show that SOUL consistently outperforms conventional first-order methods across various unlearning tasks, models, and metrics, suggesting the promise of second-order optimization as a scalable and easily implementable solution for LLM unlearning.
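To make the "second-order clipped stochastic optimization" idea concrete, the following is a minimal sketch of a Sophia-style update rule with a diagonal Hessian estimate. This is an illustrative assumption, not the authors' SOUL implementation: the function name `sophia_step`, the hyperparameter values, and the toy quadratic objective are all invented for exposition.

```python
import numpy as np

def sophia_step(theta, grad, hess_diag, m, h,
                lr=1e-3, beta1=0.9, beta2=0.99, rho=0.04, eps=1e-12):
    """One Sophia-style clipped second-order update (illustrative sketch).

    theta:     current parameters
    grad:      gradient at theta
    hess_diag: diagonal Hessian estimate at theta
    m, h:      exponential moving averages of grad and hess_diag
    """
    m = beta1 * m + (1 - beta1) * grad           # EMA of gradients
    h = beta2 * h + (1 - beta2) * hess_diag      # EMA of diagonal Hessian estimates
    # Precondition the gradient by the Hessian diagonal, then clip
    # element-wise so no coordinate moves more than rho per step.
    step = np.clip(m / np.maximum(h, eps), -rho, rho)
    return theta - lr * step, m, h

# Toy usage: minimize f(theta) = 0.5 * theta^2, so grad = theta, hess = 1.
theta, m, h = np.array([1.0]), np.zeros(1), np.zeros(1)
for _ in range(100):
    theta, m, h = sophia_step(theta, theta.copy(), np.ones(1), m, h, lr=0.1)
```

The per-coordinate clipping is what distinguishes this family of methods from a plain Newton step: when the Hessian estimate is small or noisy, the ratio `m / h` can explode, and the clip bounds the update instead of trusting the curvature estimate.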