While Large Language Models (LLMs) display versatile functionality, they continue to generate harmful, biased, and toxic content, as demonstrated by the prevalence of human-designed jailbreaks. In this work, we present Tree of Attacks with Pruning (TAP), an automated method for generating jailbreaks that requires only black-box access to the target LLM. TAP uses an attacker LLM to iteratively refine candidate (attack) prompts until one of them jailbreaks the target. In addition, before sending prompts to the target, TAP assesses them and prunes those unlikely to result in jailbreaks, reducing the number of queries sent to the target LLM. In empirical evaluations, TAP finds jailbreaking prompts for more than 80% of the target prompts against state-of-the-art LLMs (including GPT4-Turbo and GPT4o). This significantly improves upon previous state-of-the-art black-box methods for generating jailbreaks, while issuing fewer queries to the target. Furthermore, TAP can also jailbreak LLMs protected by state-of-the-art guardrails, e.g., LlamaGuard.