Large Language Models (LLMs) are susceptible to generating harmful content when prompted with carefully crafted inputs, a vulnerability known as LLM jailbreaking. As LLMs grow more capable, studying jailbreak methods is critical to enhancing security and aligning models with human values. Traditional jailbreak techniques rely on suffix addition or prompt templates, but these methods suffer from limited attack diversity. This paper introduces DiffusionAttacker, an end-to-end generative approach to jailbreak rewriting inspired by diffusion models. Our method employs a sequence-to-sequence (seq2seq) text diffusion model as a generator, conditioning on the original prompt and guiding the denoising process with a novel attack loss. Unlike previous approaches that use autoregressive LLMs to generate jailbreak prompts, which cannot revise already generated tokens and thus restrict the rewriting space, DiffusionAttacker's seq2seq diffusion model allows more flexible token modifications. This approach preserves the semantic content of the original prompt while eliciting harmful responses. Additionally, we leverage the Gumbel-Softmax technique to make sampling from the diffusion model's output distribution differentiable, eliminating the need for iterative token search. Extensive experiments on AdvBench and HarmBench demonstrate that DiffusionAttacker outperforms previous methods across various evaluation metrics, including attack success rate (ASR), fluency, and diversity.