Machine unlearning has emerged as a significant area of research, focusing on 'removing' specific subsets of data from a trained model. Fine-tuning (FT) methods have become one of the fundamental approaches for approximate unlearning, as they effectively retain model performance. However, it is consistently observed that naive FT methods struggle to forget the targeted data. In this paper, we present the first theoretical analysis of FT methods for machine unlearning within a linear regression framework, providing a deeper exploration of this phenomenon. We investigate two scenarios: one with distinct features and one with overlapping features. Our findings reveal that FT models can achieve zero remaining loss yet fail to forget the forgetting data, unlike golden models (trained from scratch without the forgetting data). This analysis shows that naive FT methods struggle to forget because the pretrained model retains information about the forgetting data, and the fine-tuning process has no impact on this retained information. To address this issue, we first propose a theoretical approach to mitigate the retention of forgetting data in the pretrained model. Our analysis shows that removing the forgetting data's influence allows FT models to match the performance of the golden model. Building on this insight, we introduce a discriminative regularization term to practically reduce the unlearning loss gap between the fine-tuned model and the golden model. Our experiments on both synthetic and real-world datasets validate these theoretical insights and demonstrate the effectiveness of the proposed regularization method.
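The failure mode described above can be reproduced in a few lines. The sketch below is a hypothetical, minimal illustration (not the paper's exact construction, and all variable names are our own): in an overparameterized noiseless linear regression, the pretrained minimum-norm model interpolates all the data, so fine-tuning on the retained data alone leaves it unchanged, and its loss on the forgetting data stays near zero, whereas a golden model trained from scratch on the retained data incurs a visibly larger forgetting loss.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 50                      # feature dimension (overparameterized: d > n)
n_retain, n_forget = 20, 10
X_r = rng.normal(size=(n_retain, d))   # retained data
X_f = rng.normal(size=(n_forget, d))   # forgetting data
w_star = rng.normal(size=d)            # ground-truth parameters
y_r, y_f = X_r @ w_star, X_f @ w_star  # noiseless labels

# Pretrained model: minimum-norm interpolator of ALL data (retain + forget).
X_all = np.vstack([X_r, X_f])
y_all = np.concatenate([y_r, y_f])
w_pre = np.linalg.pinv(X_all) @ y_all

# Naive fine-tuning: gradient descent on the retained data, starting from
# w_pre. Since w_pre already interpolates the retained data, the gradient is
# (numerically) zero and fine-tuning barely moves the model.
w_ft = w_pre.copy()
lr = 0.01
for _ in range(2000):
    w_ft -= lr * X_r.T @ (X_r @ w_ft - y_r) / n_retain

# Golden model: minimum-norm solution using the retained data only.
w_gold = np.linalg.pinv(X_r) @ y_r

def loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

print(loss(w_ft, X_r, y_r))    # remaining loss of FT model: near zero
print(loss(w_ft, X_f, y_f))    # forgetting loss of FT model: also near zero
print(loss(w_gold, X_f, y_f))  # forgetting loss of golden model: clearly larger
```

Under these assumptions the FT model matches the golden model on the retained data but, unlike the golden model, still fits the forgetting data, mirroring the claim that fine-tuning has no impact on the information about the forgetting data retained by the pretrained model.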