Large Language Models (LLMs) are susceptible to generating harmful content when prompted with carefully crafted inputs, a vulnerability known as LLM jailbreaking. As LLMs grow more powerful, studying jailbreak methods is critical to enhancing their security and aligning them with human values. Traditional jailbreak techniques rely on suffix addition or prompt templates, but these methods suffer from limited attack diversity. This paper introduces DiffusionAttacker, an end-to-end generative approach to jailbreak rewriting inspired by diffusion models. Our method employs a sequence-to-sequence (seq2seq) text diffusion model as a generator, conditioning on the original prompt and guiding the denoising process with a novel attack loss. Unlike previous approaches that use autoregressive LLMs to generate jailbreak prompts, which cannot revise already generated tokens and thus restrict the rewriting space, DiffusionAttacker's seq2seq diffusion model allows flexible modification of any token. The rewritten prompt preserves the semantic content of the original while inducing the target model to produce harmful content. Additionally, we apply the Gumbel-Softmax technique to make sampling from the diffusion model's output distribution differentiable, eliminating the need for iterative token search. Extensive experiments on AdvBench and HarmBench demonstrate that DiffusionAttacker outperforms previous methods across various evaluation metrics, including attack success rate (ASR), fluency, and diversity.
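The Gumbel-Softmax relaxation referenced above can be sketched as follows. This is a minimal NumPy illustration of the general technique (Jang et al., 2017), not the authors' implementation; the function name, the toy logits, and the temperature value are illustrative assumptions.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable relaxation of categorical sampling: perturb the
    logits with Gumbel(0, 1) noise, then apply a temperature-scaled
    softmax. As tau -> 0 the output approaches a one-hot sample.
    (Illustrative sketch; not DiffusionAttacker's actual code.)"""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-9, 1.0, size=logits.shape)  # avoid log(0)
    g = -np.log(-np.log(u))                        # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = y - y.max(axis=-1, keepdims=True)          # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)       # soft "sample": sums to 1

# Toy output distribution over a 5-token vocabulary (hypothetical values):
logits = np.log(np.array([0.10, 0.20, 0.30, 0.25, 0.15]))
sample = gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0))
```

Because the soft sample is a smooth function of the logits, gradients of a downstream attack loss can flow back through it to the generator, which is what removes the need for discrete, iterative token search.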