Safety of Large Language Models (LLMs) has become a critical issue given their rapid progress. Greedy Coordinate Gradient (GCG) has been shown to be effective at constructing adversarial prompts that break aligned LLMs, but its optimization is time-consuming. To reduce the time cost of GCG and enable more comprehensive studies of LLM safety, in this work we study a new algorithm called $\texttt{Probe sampling}$. At the core of the algorithm is a mechanism that dynamically measures how similar a smaller draft model's predictions are to the target model's predictions on prompt candidates. When the target model agrees closely with the draft model, we rely heavily on the draft model to filter out a large number of candidates. Probe sampling achieves up to a $5.6\times$ speedup with Llama2-7b-chat and yields an equal or improved attack success rate (ASR) on AdvBench. Furthermore, probe sampling can also accelerate other prompt optimization techniques and adversarial methods, yielding accelerations of $1.8\times$ for AutoPrompt, $2.4\times$ for APE, and $2.4\times$ for AutoDAN.
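The filtering mechanism described above can be sketched as follows. This is a minimal, model-free illustration, not the paper's implementation: `draft_loss` and `target_loss` stand in for the (expensive) loss evaluations of the draft and target models, the agreement measure is taken to be Spearman rank correlation, and the mapping from agreement to filtering aggressiveness (`keep_frac`) is a hypothetical choice for illustration.

```python
import random

def spearman(a, b):
    """Spearman rank correlation between two equal-length score lists
    (assumes no ties, for simplicity)."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def probe_sampling_step(candidates, draft_loss, target_loss,
                        probe_size=4, seed=0):
    """One candidate-selection step: prune candidates with the cheap draft
    model, trusting it more when it agrees with the target on a probe set."""
    rng = random.Random(seed)
    # 1. Score every candidate with the cheap draft model.
    draft_scores = [draft_loss(c) for c in candidates]
    # 2. Measure draft/target agreement on a small random probe subset.
    probe_idx = rng.sample(range(len(candidates)), probe_size)
    agreement = spearman([draft_scores[i] for i in probe_idx],
                         [target_loss(candidates[i]) for i in probe_idx])
    # 3. High agreement -> keep only the top draft-ranked candidates;
    #    low agreement -> fall back toward scoring everything with the target.
    keep_frac = max(0.0, 1.0 - agreement) + 1.0 / len(candidates)
    k = max(1, min(len(candidates), int(len(candidates) * keep_frac)))
    kept = sorted(range(len(candidates)), key=lambda i: draft_scores[i])[:k]
    # 4. Final selection with the expensive target model on the survivors.
    best = min(kept, key=lambda i: target_loss(candidates[i]))
    return candidates[best]
```

The speedup comes from step 3: when the draft model ranks candidates like the target does, only a small survivor set ever reaches the expensive target-model evaluation.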