As large language models (LLMs) become increasingly prevalent and integrated into autonomous systems, ensuring their safety is imperative. Despite significant strides toward safety alignment, the recent GCG attack~\citep{zou2023universal} proposes a discrete token optimization algorithm that selects the single suffix with the lowest loss to successfully jailbreak aligned LLMs. In this work, we first discuss the drawbacks of picking only the lowest-loss suffix during GCG optimization and uncover successful suffixes that are missed during intermediate steps. Moreover, we utilize those successful suffixes as training data to learn a generative model, named AmpleGCG, which captures the distribution of adversarial suffixes given a harmful query and enables the rapid generation of hundreds of suffixes for any harmful query in seconds. AmpleGCG achieves a near 100\% attack success rate (ASR) on two aligned LLMs (Llama-2-7B-chat and Vicuna-7B), surpassing the two strongest attack baselines. More interestingly, AmpleGCG also transfers seamlessly to attack different models, including closed-source LLMs, achieving a 99\% ASR on the latest GPT-3.5. To summarize, our work amplifies the impact of GCG by training a generative model of adversarial suffixes that is universal to any harmful query and transferable from attacking open-source LLMs to closed-source LLMs. In addition, it can generate 200 adversarial suffixes for one harmful query in only 4 seconds, rendering it more challenging to defend against.
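The core observation above can be sketched as follows. This is a minimal, hedged illustration (not the authors' implementation): instead of returning only the single lowest-loss suffix at the end of GCG optimization, every intermediate candidate that already jailbreaks the target is harvested as training data. Here `gcg_step` and `is_jailbroken` are hypothetical toy stand-ins for the real discrete-token update and attack-success check.

```python
import random

def gcg_step(query, suffix, rng):
    """Toy stand-in for a GCG update: perturb the suffix, return a mock loss."""
    new_suffix = suffix + "!"          # placeholder token update
    return new_suffix, rng.random()    # mock loss in [0, 1)

def is_jailbroken(loss, threshold=0.3):
    """Toy success check; in practice one would inspect the victim model's output."""
    return loss < threshold

def collect_successful_suffixes(query, num_steps=100, seed=0):
    """Harvest EVERY successful intermediate suffix, not just the argmin-loss one."""
    rng = random.Random(seed)
    suffix, successes = "", []
    for _ in range(num_steps):
        suffix, loss = gcg_step(query, suffix, rng)
        if is_jailbroken(loss):        # keep each success as a training example
            successes.append(suffix)
    return successes

hits = collect_successful_suffixes("example harmful query")
print(f"collected {len(hits)} successful suffixes")
```

Pairs of (harmful query, successful suffix) collected this way would then serve as the training corpus for a generative model like AmpleGCG, which can sample many suffixes per query at inference time.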