Recent advances in text-to-image diffusion models have brought them into the public spotlight, making them widely accessible and embraced by everyday users. However, these models have been shown to generate harmful content such as not-safe-for-work (NSFW) images. While approaches have been proposed to erase such abstract concepts from the models, jailbreaking techniques have succeeded in bypassing these safety measures. In this paper, we propose TraSCE, an approach that guides the diffusion trajectory away from generating harmful content. Our approach builds on negative prompting, but as we show in this paper, the widely used negative prompting strategy is not a complete solution and can be bypassed in certain corner cases. To address this issue, we first propose a specific formulation of negative prompting in place of the widely used one. Furthermore, we introduce a localized, loss-based guidance that enhances this modified negative prompting by steering the diffusion trajectory. We demonstrate that our method achieves state-of-the-art results on various benchmarks for removing harmful content, including benchmarks proposed by red teams, and for erasing artistic styles and objects. Our approach requires no training, no weight modifications, and no training data (either images or prompts), making it easier for model owners to erase new concepts.
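For readers unfamiliar with the mechanism the abstract refers to, below is a minimal sketch of the widely used negative prompting formulation in classifier-free guidance samplers, where the noise prediction conditioned on a negative prompt replaces the unconditional prediction at each denoising step. Function names and tensor shapes are illustrative; TraSCE's modified formulation and its loss-based guidance are not reproduced here, since the abstract does not specify them.

```python
import torch

def cfg_noise(eps_uncond: torch.Tensor,
              eps_cond: torch.Tensor,
              scale: float = 7.5) -> torch.Tensor:
    """Standard classifier-free guidance: push the prediction from the
    unconditional estimate toward the prompt-conditioned estimate."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

def negative_prompt_noise(eps_neg: torch.Tensor,
                          eps_cond: torch.Tensor,
                          scale: float = 7.5) -> torch.Tensor:
    """Widely used negative prompting: the prediction conditioned on the
    negative prompt (e.g., the concept to erase) replaces the
    unconditional term, steering each step away from that concept."""
    return eps_neg + scale * (eps_cond - eps_neg)

if __name__ == "__main__":
    # Illustrative latent shape: (batch, channels, height, width).
    eps_cond = torch.randn(1, 4, 64, 64)
    eps_neg = torch.randn(1, 4, 64, 64)
    guided = negative_prompt_noise(eps_neg, eps_cond)
```

As the abstract notes, this formulation alone is not a complete safeguard: when the user prompt and the negative prompt interact in corner cases, the guided trajectory can still reach the undesired content, which motivates the modified formulation and trajectory steering proposed in the paper.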