Despite significant ongoing efforts in safety alignment, large language models (LLMs) such as GPT-4 and LLaMA 3 remain vulnerable to jailbreak attacks that can induce harmful behaviors, including attacks that rely on adversarial suffixes. Building on prior research, we hypothesize that these adversarial suffixes are not mere bugs but may represent features that can dominate an LLM's behavior. To evaluate this hypothesis, we conduct three experiments. First, we demonstrate that benign features can effectively function as adversarial suffixes: we develop a feature-extraction method that distills sample-agnostic features from benign datasets into suffixes, and show that these suffixes can compromise safety alignment. Second, we show that adversarial suffixes generated by jailbreak attacks may themselves encode meaningful features, in that appending the same suffix to different prompts yields responses with shared, specific characteristics. Third, we show that such benign-yet-safety-compromising features can easily be introduced through fine-tuning on purely benign datasets. As a result, we are able to completely eliminate GPT's safety alignment in a black-box setting through fine-tuning with only benign data. Our code and data are available at \url{https://github.com/suffix-maybe-feature/adver-suffix-maybe-features}.
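The abstract describes the feature-extraction step only at a high level. As a rough illustration of what extracting a sample-agnostic feature "in the form of a suffix" could look like, the sketch below optimizes a single shared soft (embedding-space) suffix so that, appended to many different benign prompts, it steers a causal LM toward responses exhibiting one common characteristic. This is a minimal sketch under our own assumptions, not the paper's released implementation: the model name, data, and hyperparameters are placeholders, and the actual method may instead search over discrete tokens.

\begin{lstlisting}[language=Python]
# Hypothetical sketch: learn one sample-agnostic soft suffix that,
# appended to different benign prompts, pushes a causal LM toward
# responses sharing a target characteristic. All names and
# hyperparameters here are illustrative placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
for p in model.parameters():   # freeze the model; only the suffix is trained
    p.requires_grad_(False)
embed = model.get_input_embeddings()

# Benign (prompt, target-response) pairs: every response exhibits the
# same benign "feature" (here, a storytelling style), and one shared
# suffix is optimized across all pairs, so the feature is sample-agnostic.
pairs = [
    ("What is the capital of France?", "Once upon a time, in a city of lights..."),
    ("How do plants make food?", "Once upon a time, beneath a warm sun..."),
]

suffix_len = 16
# Initialize the soft suffix from random token embeddings.
suffix = torch.nn.Parameter(
    embed.weight[torch.randint(embed.num_embeddings, (suffix_len,))].clone()
)
opt = torch.optim.Adam([suffix], lr=1e-3)

for step in range(200):
    opt.zero_grad()
    for prompt, target in pairs:
        p_ids = tok(prompt, return_tensors="pt").input_ids
        t_ids = tok(target, return_tensors="pt",
                    add_special_tokens=False).input_ids
        # Sequence layout: [prompt tokens | soft suffix | target tokens].
        inputs = torch.cat(
            [embed(p_ids), suffix.unsqueeze(0), embed(t_ids)], dim=1)
        logits = model(inputs_embeds=inputs).logits
        # Positions p_len+suffix_len-1 ... -2 predict the target tokens;
        # the loss is cross-entropy over the target span only.
        t_logits = logits[:, p_ids.size(1) + suffix_len - 1 : -1, :]
        loss = F.cross_entropy(
            t_logits.reshape(-1, t_logits.size(-1)), t_ids.reshape(-1))
        loss.backward()
    opt.step()
\end{lstlisting}

Once optimized, such a soft suffix could be projected back to the nearest discrete tokens for use as a textual suffix; the point of the sketch is only that a suffix carrying a benign feature is obtained without any harmful training data.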