We explore machine unlearning (MU) in the domain of large language models (LLMs), referred to as LLM unlearning. This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities, while maintaining the integrity of essential knowledge generation and leaving causally unrelated information unaffected. We envision LLM unlearning becoming a pivotal element in the life-cycle management of LLMs, potentially standing as an essential foundation for developing generative AI that is not only safe, secure, and trustworthy, but also resource-efficient, without the need for full retraining. We survey the unlearning landscape in LLMs across conceptual formulation, methodologies, evaluation metrics, and applications. In particular, we highlight the often-overlooked aspects of existing LLM unlearning research, e.g., unlearning scope, data-model interaction, and multifaceted efficacy assessment. We also draw connections between LLM unlearning and related areas such as model editing, influence functions, model explanation, adversarial training, and reinforcement learning. Furthermore, we outline an effective assessment framework for LLM unlearning and explore its applications in copyright and privacy safeguards and in sociotechnical harm reduction.
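To make the unlearning goal concrete, here is a minimal, purely illustrative sketch (not from the paper): one simple family of approximate unlearning methods fine-tunes the model toward alternative outputs on a "forget" set while penalizing performance drift on a "retain" set. The 1-D logistic model, the label-flipping choice, and all names (`unlearn`, `forget`, `retain`, `lam`) are assumptions standing in for an LLM and its forget/retain corpora.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def nll(w, data):
    """Average negative log-likelihood of a toy 1-D logistic model p(y=1|x) = sigmoid(w*x)."""
    total = 0.0
    for x, y in data:
        p = min(max(sigmoid(w * x), 1e-12), 1.0 - 1e-12)  # clamp for numerical safety
        total += -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total / len(data)

def unlearn(w, forget, retain, lam=1.0, lr=0.1, steps=300):
    """Toy approximate unlearning (hypothetical sketch): fine-tune toward flipped
    labels on the forget set while a retain-set loss anchors the parameters."""
    flipped = [(x, 1 - y) for x, y in forget]
    for _ in range(steps):
        eps = 1e-5
        obj = lambda v: nll(v, flipped) + lam * nll(v, retain)
        # numerical gradient keeps the sketch dependency-free
        grad = (obj(w + eps) - obj(w - eps)) / (2 * eps)
        w -= lr * grad
    return w

forget = [(2.0, 1)]              # data whose influence should be removed
retain = [(1.0, 1), (-1.0, 0)]   # behavior that must be preserved
w0 = 2.0                         # a "pretrained" weight that fits both sets
w1 = unlearn(w0, forget, retain)
# after unlearning, the forget-set loss rises sharply while the retain-set loss drifts less
```

The retain term plays the role of the abstract's "maintaining the integrity of essential knowledge generation": without it, maximizing forget-set loss alone would degrade the model arbitrarily, which is why multifaceted efficacy assessment (forget quality *and* retained utility) matters.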