There is growing concern over the safety of powerful diffusion models (DMs), as they are often misused to produce inappropriate, not-safe-for-work (NSFW) content, to generate copyrighted material, or to reproduce data of individuals who wish to be forgotten. Many existing methods tackle these issues by relying heavily on text-based negative prompts or by extensively retraining DMs to eliminate certain features or samples. In this paper, we take a radically different approach: we directly modify the sampling trajectory by leveraging a negation set (e.g., unsafe images, copyrighted data, or datapoints that need to be excluded) to avoid specific regions of the data distribution, without retraining or fine-tuning the DM. We formally derive the relationship between the expected denoised samples that are safe and those that are not, leading to our $\textit{safe}$ denoiser, which ensures that its final samples stay away from the region to be negated. Building on this derivation, we develop a practical algorithm that produces high-quality samples while avoiding the negated regions of the data distribution in text-conditional, class-conditional, and unconditional image generation. These results highlight the potential of our training-free safe denoiser for the safer use of DMs.
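To make the claimed relationship concrete, the following is a minimal sketch of the kind of identity such a derivation rests on; the event $S$ (the clean sample is safe), its complement $S^c$ (the clean sample falls in the negation set), and the mixture decomposition are illustrative assumptions of ours, not necessarily the paper's exact notation. By the law of total expectation, the standard denoiser decomposes as

$$
\mathbb{E}[\mathbf{x}_0 \mid \mathbf{x}_t]
= P(S \mid \mathbf{x}_t)\,\mathbb{E}[\mathbf{x}_0 \mid \mathbf{x}_t, S]
+ P(S^c \mid \mathbf{x}_t)\,\mathbb{E}[\mathbf{x}_0 \mid \mathbf{x}_t, S^c],
$$

so a safe denoiser can, in principle, be recovered from the pretrained one by rearranging:

$$
\mathbb{E}[\mathbf{x}_0 \mid \mathbf{x}_t, S]
= \frac{\mathbb{E}[\mathbf{x}_0 \mid \mathbf{x}_t]
- P(S^c \mid \mathbf{x}_t)\,\mathbb{E}[\mathbf{x}_0 \mid \mathbf{x}_t, S^c]}{P(S \mid \mathbf{x}_t)},
$$

where $\mathbb{E}[\mathbf{x}_0 \mid \mathbf{x}_t]$ is the pretrained (Tweedie) denoiser and the unsafe-conditioned terms would have to be estimated from the negation set.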