Recent advancements in text-to-image diffusion models have brought them into the public spotlight, making them widely accessible to and embraced by everyday users. However, these models have been shown to generate harmful content such as not-safe-for-work (NSFW) images. While approaches have been proposed to erase such abstract concepts from the models, jailbreaking techniques have succeeded in bypassing these safety measures. In this paper, we propose TraSCE, an approach to guide the diffusion trajectory away from generating harmful content. Our approach is based on negative prompting, but as we show in this paper, conventional negative prompting is not a complete solution and can easily be bypassed in some corner cases. To address this issue, we first propose a modification of conventional negative prompting. We then introduce a localized loss-based guidance that enhances the modified negative prompting technique by steering the diffusion trajectory. We demonstrate that our proposed method achieves state-of-the-art results on various benchmarks for removing harmful content, including ones proposed by red teams, as well as for erasing artistic styles and objects. Our approach requires no training, weight modifications, or training data (neither images nor prompts), making it easier for model owners to erase new concepts.
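For context, below is a minimal sketch of the conventional negative-prompting rule that the abstract refers to, written against a PyTorch-style denoiser. The function name, tensor names, and guidance scale are illustrative assumptions, not the paper's implementation; the paper's contribution is precisely to modify and augment this baseline.

```python
import torch

def negative_prompt_guidance(eps_cond: torch.Tensor,
                             eps_neg: torch.Tensor,
                             scale: float = 7.5) -> torch.Tensor:
    """Combine two noise estimates via classifier-free guidance.

    Conventional negative prompting swaps the unconditional branch of
    classifier-free guidance for a noise estimate conditioned on a
    negative prompt (e.g., "nudity"), nudging the denoising trajectory
    away from that concept at every sampling step:

        eps = eps_neg + scale * (eps_cond - eps_neg)
    """
    return eps_neg + scale * (eps_cond - eps_neg)
```

In practice, this baseline corresponds to the `negative_prompt` argument exposed by common Stable Diffusion pipelines; the abstract's corner cases arise because this subtraction alone does not guarantee the trajectory avoids the erased concept.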