We explore machine unlearning (MU) in the domain of large language models (LLMs), referred to as LLM unlearning. This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities, while maintaining the integrity of essential knowledge generation and not affecting causally unrelated information. We envision LLM unlearning becoming a pivotal element in the life-cycle management of LLMs, potentially standing as an essential foundation for developing generative AI that is not only safe, secure, and trustworthy, but also resource-efficient without the need for full retraining. We navigate the unlearning landscape in LLMs across conceptual formulation, methodologies, metrics, and applications. In particular, we highlight the often-overlooked aspects of existing LLM unlearning research, e.g., unlearning scope, data-model interaction, and multifaceted efficacy assessment. We also draw connections between LLM unlearning and related areas such as model editing, influence functions, model explanation, adversarial training, and reinforcement learning. Furthermore, we outline an effective assessment framework for LLM unlearning and explore its applications in copyright and privacy safeguards and sociotechnical harm reduction.