This paper explores Machine Unlearning (MU), an emerging field that is gaining attention due to concerns about neural models unintentionally memorizing personal or sensitive information. We present SeUL, a novel method that enables selective, fine-grained unlearning for language models. Unlike previous work that employs a fully reversed training objective for unlearning, SeUL minimizes the negative impact on language models' capabilities, particularly generation quality. Furthermore, we introduce two evaluation metrics, sensitive extraction likelihood (S-EL) and sensitive memorization accuracy (S-MA), specifically designed to assess how effectively sensitive information is forgotten. In support of the unlearning framework, we propose efficient automatic online and offline sensitive-span annotation methods. The online selection method, based on language probability scores, ensures computational efficiency, while the offline annotation involves a two-stage LLM-based process for robust verification. In summary, this paper contributes a novel selective unlearning method (SeUL), introduces specialized evaluation metrics (S-EL and S-MA) for assessing the forgetting of sensitive information, and proposes automatic online and offline sensitive-span annotation methods to support the overall unlearning framework and evaluation process.
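The online span selection based on language probability scores can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it assumes a span is flagged as sensitive when its tokens receive unusually high model probability (a common signal of memorization), with the probability threshold and minimum span length as hypothetical parameters.

```python
# Hypothetical sketch of online sensitive-span selection by token probability.
# Assumption: memorized (potentially sensitive) text tends to be assigned
# unusually high per-token probability by the model, so contiguous runs of
# high-probability tokens are flagged as candidate sensitive spans.

def select_spans(token_probs, threshold=0.9, min_len=2):
    """Return (start, end) index pairs (end exclusive) of contiguous runs
    of tokens whose probability meets or exceeds `threshold` and whose
    length is at least `min_len`."""
    spans = []
    start = None
    for i, p in enumerate(token_probs):
        if p >= threshold:
            if start is None:
                start = i  # open a new candidate span
        else:
            if start is not None and i - start >= min_len:
                spans.append((start, i))  # close a long-enough span
            start = None
    # handle a span that runs to the end of the sequence
    if start is not None and len(token_probs) - start >= min_len:
        spans.append((start, len(token_probs)))
    return spans


# Example: tokens 1-2 form a high-probability run; the lone token at
# index 4 is too short to count as a span.
print(select_spans([0.20, 0.95, 0.97, 0.30, 0.99]))
```

In practice the per-token probabilities would come from the language model being unlearned, and only the flagged spans would receive the unlearning objective, leaving the rest of the sequence's training signal intact.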