Large language models (LLMs) may memorize sensitive or copyrighted content, raising privacy and legal concerns. Because retraining from scratch is prohibitively expensive, researchers have turned to machine unlearning to remove specific content from LLMs while preserving overall performance. In this paper, we discuss several issues in machine unlearning for LLMs and provide our insights on possible approaches. To address the inadequate evaluation of model outputs after unlearning, we introduce three additional metrics that assess token diversity, sentence semantics, and factual correctness. We then categorize unlearning methods as untargeted or targeted and discuss the issues of each. Specifically, the behavior that untargeted unlearning attempts to approximate is unpredictable and may involve hallucinations, while existing regularization is insufficient for targeted unlearning. To alleviate these issues, we propose a maximizing-entropy (ME) objective for untargeted unlearning and an answer-preservation (AP) loss as regularization for targeted unlearning. Experimental results across three scenarios, i.e., fictitious unlearning, continual unlearning, and real-world unlearning, demonstrate the effectiveness of our approaches. The code is available at https://github.com/sail-sg/closer-look-LLM-unlearning.
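To make the maximizing-entropy idea concrete, the following is a minimal PyTorch sketch of one plausible formulation: on forget-set tokens, the model's next-token distribution is pushed toward uniform by minimizing its KL divergence to the uniform distribution, which is equivalent (up to a constant) to maximizing predictive entropy. The function name, mask convention, and exact averaging are illustrative assumptions, not the paper's verbatim implementation.

```python
import torch
import torch.nn.functional as F

def maximize_entropy_loss(logits: torch.Tensor, forget_mask: torch.Tensor) -> torch.Tensor:
    """Sketch of a maximizing-entropy (ME) unlearning objective.

    logits:      (batch, seq, vocab) next-token logits on forget-set text.
    forget_mask: (batch, seq) with 1.0 on tokens to unlearn, 0.0 elsewhere.

    Minimizing KL(p || uniform) = log(vocab) - H(p) drives the model's
    predictions toward uniform, i.e., maximizes their entropy.
    """
    log_probs = F.log_softmax(logits, dim=-1)                  # (batch, seq, vocab)
    vocab_size = logits.size(-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)       # (batch, seq)
    kl_to_uniform = torch.log(torch.tensor(float(vocab_size))) - entropy
    # Average the per-token KL over forget-set positions only.
    return (kl_to_uniform * forget_mask).sum() / forget_mask.sum()
```

By construction, uniform logits incur near-zero loss while confident (peaked) predictions incur a loss near log(vocab), so gradient descent on this objective flattens the model's distribution on the forget set rather than steering it toward any particular (possibly hallucinated) alternative answer.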