We introduce Best-of-N (BoN) Jailbreaking, a simple black-box algorithm that jailbreaks frontier AI systems across modalities. BoN Jailbreaking works by repeatedly sampling variations of a prompt with a combination of augmentations, such as random shuffling or capitalization for textual prompts, until a harmful response is elicited. We find that BoN Jailbreaking achieves high attack success rates (ASRs) on closed-source language models, such as 89% on GPT-4o and 78% on Claude 3.5 Sonnet when sampling 10,000 augmented prompts. Further, it is similarly effective at circumventing state-of-the-art open-source defenses such as circuit breakers. BoN also extends seamlessly to other modalities: it jailbreaks vision language models (VLMs) such as GPT-4o and audio language models (ALMs) such as Gemini 1.5 Pro, using modality-specific augmentations. BoN reliably improves as we sample more augmented prompts: across all modalities, ASR as a function of the number of samples (N) empirically follows power-law-like behavior over many orders of magnitude. BoN Jailbreaking can also be composed with other black-box algorithms for even more effective attacks; combining BoN with an optimized prefix attack achieves up to a 35% increase in ASR. Overall, our work indicates that, despite their capability, language models are sensitive to seemingly innocuous changes to inputs, which attackers can exploit across modalities.
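The core loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: the augmentation probabilities, the `query_model` interface, and the `is_harmful` classifier are all stand-ins supplied by the caller.

```python
import random


def augment(prompt: str, p_swap: float = 0.06, p_caps: float = 0.6) -> str:
    """Apply BoN-style text augmentations: random shuffling of adjacent
    characters and random capitalization (illustrative parameter values)."""
    chars = list(prompt)
    # Random shuffling: swap neighboring characters with probability p_swap.
    for i in range(len(chars) - 1):
        if random.random() < p_swap:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    # Random capitalization: upper-case each character with probability p_caps.
    return "".join(c.upper() if random.random() < p_caps else c.lower()
                   for c in chars)


def bon_jailbreak(prompt, query_model, is_harmful, n_max=10_000):
    """Repeatedly sample augmented variants of `prompt` until `is_harmful`
    flags a response, or the sample budget n_max is exhausted.
    `query_model` and `is_harmful` are hypothetical stand-ins for the
    target model API and a harm classifier."""
    for n in range(1, n_max + 1):
        candidate = augment(prompt)
        response = query_model(candidate)
        if is_harmful(response):
            return n, candidate, response  # success after n samples
    return None  # attack failed within the budget
```

Because each sample is an independent draw, the probability that all N samples fail decays roughly geometrically in N, which is consistent with the ASR-versus-N scaling behavior reported above.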