Text-to-image (T2I) models, though exhibiting remarkable creativity in image generation, can be exploited to produce unsafe images. Existing safety measures, e.g., content moderation or model alignment, fail against white-box adversaries who know and can adjust model parameters, e.g., by fine-tuning. This paper presents a novel defensive framework, named Patronus, which equips T2I models with holistic protection against white-box adversaries. Specifically, we design an internal moderator that decodes unsafe input features into zero vectors while preserving the decoding performance of benign input features. Furthermore, we strengthen model alignment with a carefully designed non-fine-tunable learning mechanism, ensuring the T2I model cannot be compromised by malicious fine-tuning. Extensive experiments validate that Patronus leaves performance on safe content generation intact while effectively rejecting unsafe content generation. Results also confirm the resilience of Patronus against various fine-tuning attacks mounted by white-box adversaries.
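The internal moderator's gating behavior can be sketched abstractly as follows. This is a minimal illustration only: the `unsafe_score` input stands in for a learned safety classifier, and the function name, hard threshold, and list-based features are illustrative assumptions, not the paper's actual architecture.

```python
def internal_moderator(features, unsafe_score, threshold=0.5):
    """Hypothetical sketch of the internal moderator's gating.

    `unsafe_score` is a placeholder for a learned safety signal;
    the hard threshold is an illustrative assumption.
    """
    if unsafe_score >= threshold:
        # Unsafe input features are decoded into a zero vector.
        return [0.0] * len(features)
    # Benign input features pass through with decoding unaffected.
    return list(features)

benign = internal_moderator([0.2, -1.3, 0.7], unsafe_score=0.1)
blocked = internal_moderator([0.2, -1.3, 0.7], unsafe_score=0.9)
```

In the real system this gating would be learned jointly with the decoder rather than applied as a post-hoc threshold, so that a white-box adversary cannot simply remove it.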