Recent advances in generative models trained on large-scale datasets have made it possible to synthesize high-quality samples across various domains. Moreover, the emergence of strong inversion networks enables not only the reconstruction of real-world images but also the modification of their attributes through various editing methods. However, in domains with privacy concerns, e.g., human faces, advanced generative models combined with strong inversion methods can lead to potential misuse. In this paper, we propose an essential yet under-explored task called generative identity unlearning, which steers the model away from generating images of a specific identity. In generative identity unlearning, we target the following objectives: (i) preventing the generation of images with a certain identity, and (ii) preserving the overall quality of the generative model. To satisfy these goals, we propose a novel framework, Generative Unlearning for Any Identity (GUIDE), which prevents the reconstruction of a specific identity by unlearning the generator with only a single image. GUIDE consists of two parts: (i) finding a target point for optimization that un-identifies the source latent code, and (ii) novel loss functions that facilitate the unlearning procedure while minimally affecting the learned distribution. Our extensive experiments demonstrate that our proposed method achieves state-of-the-art performance in the generative machine unlearning task. The code is available at https://github.com/KHU-AGI/GUIDE.
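The two-part procedure described above can be illustrated with a minimal toy sketch. This is NOT the paper's implementation: the generator is replaced by a plain linear map instead of a StyleGAN-style network, the un-identified target latent is simply taken to be the mean (zero) latent, and all names (`W`, `w_src`, `w_avg`, `lam`) are hypothetical stand-ins. The sketch only shows the structure of the objective: a forget term that pulls the source latent's output toward the un-identified target, plus a preservation term that keeps outputs on other latents close to those of the original generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a linear "generator" W mapping latent codes to
# outputs, a source latent w_src obtained by inversion of the target
# identity, and the mean latent w_avg as the un-identified target point.
d_latent, d_out = 8, 16
W0 = rng.normal(size=(d_out, d_latent))   # original generator weights
W = W0.copy()                             # weights to be unlearned
w_src = rng.normal(size=d_latent)         # inverted latent of the identity
w_avg = np.zeros(d_latent)                # un-identified target latent
others = rng.normal(size=(32, d_latent))  # latents whose outputs we preserve
y_others = others @ W0.T                  # original outputs to keep

lam, lr = 0.1, 0.05                       # preservation weight, step size
for _ in range(200):
    # (i) forget term: pull the source latent's output toward the output
    #     at the un-identified target latent (gradient of an L2 loss)
    err_forget = W @ w_src - W @ w_avg
    grad = np.outer(err_forget, w_src - w_avg)
    # (ii) preservation term: keep outputs on other latents close to the
    #      original generator's outputs
    err_keep = others @ W.T - y_others
    grad += lam * err_keep.T @ others / len(others)
    W -= lr * grad

init_forget = np.linalg.norm(W0 @ w_src - W0 @ w_avg)   # before unlearning
final_forget = np.linalg.norm(W @ w_src - W @ w_avg)    # after unlearning
keep_drift = np.mean(np.linalg.norm(others @ (W - W0).T, axis=1))
```

After the loop, the forget distance shrinks substantially while the average drift on other latents stays small relative to the change forced on the source identity, which is the trade-off the two GUIDE objectives formalize.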