Safety alignment of Large Language Models (LLMs) has recently become a critical objective of model developers. In response, a growing body of work has investigated how safety alignment can be bypassed through various jailbreak methods, such as adversarial attacks. However, these jailbreak methods can be rather costly or require a non-trivial amount of creativity and effort, implicitly assuming that malicious users are high-resource or sophisticated. In this paper, we study how simple random augmentations to the input prompt affect safety alignment effectiveness in state-of-the-art LLMs, such as Llama 3 and Qwen 2. We perform an in-depth evaluation of 17 different models and investigate the intersection of safety under random augmentations with multiple dimensions: augmentation type, model size, quantization, fine-tuning-based defenses, and decoding strategies (e.g., sampling temperature). We show that low-resource and unsophisticated attackers, i.e., $\textit{stochastic monkeys}$, can significantly improve their chances of bypassing alignment with just 25 random augmentations per prompt. Source code and data: https://github.com/uiuc-focal-lab/stochastic-monkeys/
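To make the attack model concrete, the following is a minimal sketch of what generating random character-level augmentations of a prompt could look like. This is an illustrative assumption, not the paper's actual implementation: the function name `random_augment`, the edit budget `n_edits`, and the insert/substitute edit types are all hypothetical choices made here for clarity.

```python
import random
import string

def random_augment(prompt, n_edits=3, seed=None):
    """Apply n_edits random character-level edits (insertions or
    substitutions) to the prompt. A generic sketch of random prompt
    augmentation; hypothetical, not the paper's exact method."""
    rng = random.Random(seed)
    chars = list(prompt)
    for _ in range(n_edits):
        pos = rng.randrange(len(chars) + 1)
        ch = rng.choice(string.ascii_letters + string.digits)
        if pos < len(chars) and rng.random() < 0.5:
            chars[pos] = ch       # substitute an existing character
        else:
            chars.insert(pos, ch)  # insert a new character
    return "".join(chars)

# Generate 25 augmented variants of one prompt, matching the
# 25-augmentations-per-prompt attack budget described above.
variants = [random_augment("example prompt", seed=i) for i in range(25)]
```

Each variant would then be sent to the target model independently; the attack succeeds if any one of the 25 responses bypasses alignment.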