Recent advances in image generation models (IGMs), particularly diffusion-based architectures such as Stable Diffusion (SD), have markedly enhanced the quality and diversity of AI-generated visual content. However, their generative capability has also raised significant ethical, legal, and societal concerns, including the potential to produce harmful, misleading, or copyright-infringing content. To mitigate these concerns, machine unlearning (MU) has emerged as a promising solution that selectively removes undesirable concepts from pretrained models. Nevertheless, the robustness and effectiveness of existing unlearning techniques remain largely unexplored, particularly in the presence of multi-modal adversarial inputs. To bridge this gap, we propose Recall, a novel adversarial framework explicitly designed to compromise the robustness of unlearned IGMs. Unlike existing approaches that rely predominantly on adversarial text prompts, Recall exploits the intrinsic multi-modal conditioning capabilities of diffusion models by efficiently optimizing adversarial image prompts under the guidance of a single semantically relevant reference image. Extensive experiments across ten state-of-the-art unlearning methods and diverse tasks show that Recall consistently outperforms existing baselines in adversarial effectiveness, computational efficiency, and semantic fidelity to the original textual prompt. These findings reveal critical vulnerabilities in current unlearning mechanisms and underscore the need for more robust solutions to ensure the safety and reliability of generative models. Code and data are publicly available at \textcolor{blue}{https://github.com/ryliu68/RECALL}.
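The core idea described above (optimizing an adversarial image prompt so that its embedding approaches that of a semantically relevant reference image) can be illustrated with a minimal, heavily simplified sketch. The abstract does not specify the optimization procedure, so the PGD-style signed-gradient loop, the loss, the step sizes, and the tiny surrogate encoder below are all assumptions for illustration; in the actual method the encoder would be the diffusion model's image-conditioning pathway.

```python
import torch
import torch.nn.functional as F

# Hypothetical surrogate for the diffusion model's image-prompt encoder.
# This toy network only stands in for the real conditioning branch.
torch.manual_seed(0)
encoder = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, stride=2, padding=1),  # 32x32 -> 16x16
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(8 * 16 * 16, 64),               # 64-d embedding
)

def optimize_image_prompt(init_img, ref_img, steps=100, lr=0.02, eps=0.1):
    """PGD-style sketch (assumed, not the paper's exact algorithm):
    perturb `init_img` so its embedding moves toward the reference
    image's embedding, keeping the perturbation in an L_inf eps-ball."""
    with torch.no_grad():
        target = encoder(ref_img)  # embedding of the reference image
    delta = torch.zeros_like(init_img, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(encoder(init_img + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # signed gradient step
            delta.clamp_(-eps, eps)          # project onto the eps-ball
            delta.grad.zero_()
    return (init_img + delta).detach(), loss.item()

init = torch.rand(1, 3, 32, 32)  # starting image prompt (toy data)
ref = torch.rand(1, 3, 32, 32)   # semantically relevant reference (toy data)
adv, final_loss = optimize_image_prompt(init, ref)
```

The sketch only conveys the shape of the attack: a small, bounded perturbation of the image prompt steered by a single reference image, with no adversarial text prompt involved.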