Machine unlearning is an emerging technology that has attracted widespread attention. A number of factors, including regulations and laws, privacy, and usability concerns, have created the need to allow a trained model to forget some of its training data. Existing studies of machine unlearning mainly focus on requests to forget a cluster of instances or all instances from one class. While these approaches are effective at removing instances, they do not scale to scenarios where only partial targets within an instance need to be forgotten; for example, one may wish to unlearn a person from all instances that simultaneously contain that person and other targets. Directly applying instance-level unlearning to target-level unlearning either degrades the model's performance after the unlearning process or fails to erase the information completely. To address these concerns, we propose a more effective and efficient unlearning scheme that removes partial targets from the model, which we name "target unlearning". Specifically, we first construct an essential graph data structure that describes the relationships among the important parameters selected by a model explanation method. We then filter out parameters that are also important for the remaining targets and apply a pruning-based unlearning method, a simple but effective way to remove information about the target to be forgotten. Experiments with different training models on various datasets demonstrate the effectiveness of the proposed approach.
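The core selection step described above can be sketched as follows. This is a minimal, hypothetical illustration only: the importance scores, the function name `select_prune_mask`, and the thresholds are assumptions for exposition, whereas the paper's actual importance values come from its model explanation method and essential graph.

```python
import numpy as np

def select_prune_mask(forget_scores, retain_scores, k, tau):
    """Pick parameters to prune: the top-k parameters most important
    for the forget target, excluding any that are also important
    (score above tau) for the remaining targets."""
    candidates = np.argsort(forget_scores)[::-1][:k]  # top-k for forget target
    mask = np.zeros_like(forget_scores, dtype=bool)
    for i in candidates:
        if retain_scores[i] < tau:  # keep parameters the remaining targets rely on
            mask[i] = True
    return mask

# Toy example with hypothetical per-parameter importance scores.
forget = np.array([0.9, 0.8, 0.1, 0.7, 0.05])
retain = np.array([0.85, 0.1, 0.2, 0.05, 0.9])
mask = select_prune_mask(forget, retain, k=3, tau=0.5)

weights = np.ones(5)
weights[mask] = 0.0  # pruning-based unlearning: zero the selected weights
```

Here parameter 0 is highly important for the forget target but is spared because the remaining targets also depend on it, which reflects the filtering step that preserves post-unlearning accuracy.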