Recent advances in diffusion models have enabled the generation of images from text, with powerful closed-source models such as DALL-E and Midjourney leading the way. However, open-source alternatives, such as StabilityAI's Stable Diffusion, offer comparable capabilities. These open-source models, hosted on Hugging Face, come equipped with ethical filter protections designed to prevent the generation of explicit images. This paper first reveals the limitations of these filters and then presents DiffGuard, a novel text-based safety filter that outperforms existing solutions. Our research is driven by the critical need to address the misuse of AI-generated content, especially in the context of information warfare. DiffGuard enhances filtering efficacy, surpassing the best existing filters by over 14%.