Owing to the remarkable generative power of diffusion-based models, numerous studies have investigated jailbreak attacks against these frameworks. A particularly concerning threat in image models is the generation of Not-Safe-for-Work (NSFW) content. Despite the deployment of safety filters, many efforts continue to explore ways of circumventing these safeguards. Existing attack methods rely primarily on adversarial prompt engineering or concept obfuscation, yet they frequently suffer from slow search efficiency, conspicuous attack signatures, and poor alignment with the target. To overcome these challenges, we propose Antelope, a more robust and covert jailbreak attack strategy designed to expose security vulnerabilities inherent in generative models. Specifically, Antelope exploits the confusability of sensitive concepts with similar ones: it searches the semantically adjacent space of these related concepts and aligns the results with the target imagery, thereby generating sensitive images that are consistent with the target while evading detection. In addition, we successfully exploit the transferability of model-based attacks to penetrate online black-box services. Experimental evaluations demonstrate that Antelope outperforms existing baselines under multiple defense mechanisms, underscoring its efficacy and versatility.
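The core idea of searching the semantically adjacent space of a sensitive concept can be sketched as a nearest-neighbour ranking in a text-embedding space. The sketch below is purely illustrative and not the paper's implementation: the concept names and hand-crafted vectors are hypothetical stand-ins for embeddings a real attack would obtain from the diffusion model's own text encoder (e.g., CLIP).

```python
import numpy as np

# Hypothetical stand-in for a text encoder: in practice the search would
# operate on the diffusion model's actual text embeddings; these vectors
# are hand-crafted so the example is deterministic and self-contained.
embeddings = {
    "target_concept": np.array([1.0, 1.0, 1.0, 1.0]),
    "related_a":      np.array([1.0, 1.0, 1.0, 0.8]),   # semantically adjacent
    "related_b":      np.array([1.0, 1.0, 0.5, 0.5]),   # somewhat related
    "unrelated_c":    np.array([1.0, -1.0, 1.0, -1.0]), # far away
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

target = embeddings["target_concept"]
candidates = [w for w in embeddings if w != "target_concept"]

# Rank candidate substitute concepts by semantic proximity to the
# sensitive target; the nearest ones define the "adjacent space" in
# which an attack prompt could be searched for.
ranked = sorted(candidates, key=lambda w: cosine(embeddings[w], target),
                reverse=True)
print(ranked)  # nearest neighbour first
```

In this toy setup, `related_a` ranks closest to the target and `unrelated_c` ranks last, mirroring how concept substitutes near the sensitive target in embedding space would be preferred during the search.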