Recent advances in 3D-aware generative models have enabled high-fidelity image synthesis of human identities. However, this progress raises urgent questions around user consent and the ability to remove specific individuals from a model's output space. We address this by introducing SUGAR, a framework for scalable generative unlearning that enables the removal of many identities (simultaneously or sequentially) without retraining the entire model. Rather than projecting unwanted identities to unrealistic outputs or relying on static template faces, SUGAR learns a personalized surrogate latent for each identity, diverting reconstructions to visually coherent alternatives while preserving the model's quality and diversity. We further introduce a continual utility preservation objective that guards against degradation as more identities are forgotten. SUGAR achieves state-of-the-art performance in removing up to 200 identities, while delivering up to a 700% improvement in retention utility compared to existing baselines. Our code is publicly available at https://github.com/judydnguyen/SUGAR-Generative-Unlearn.