Recent advances in jailbreaking large language models (LLMs), such as AutoDAN-Turbo, have demonstrated the power of automated strategy discovery. AutoDAN-Turbo employs a lifelong learning agent to build a rich library of attack strategies from scratch. While highly effective, its test-time generation process samples a single strategy and generates one corresponding attack prompt, which may not fully exploit the potential of the learned strategy library. In this paper, we propose to further improve the attack performance of AutoDAN-Turbo through test-time scaling. We introduce two distinct scaling methods: Best-of-N and Beam Search. The Best-of-N method generates N candidate attack prompts from a sampled strategy and selects the most effective one according to a scorer model. The Beam Search method conducts a more exhaustive search, exploring combinations of strategies from the library to discover more potent and synergistic attack vectors. Experiments show that the proposed methods significantly boost performance: Beam Search increases the attack success rate by up to 15.6 percentage points on Llama-3.1-70B-Instruct and achieves a nearly 60\% relative improvement over the vanilla method against the highly robust GPT-o4-mini.
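To make the Best-of-N procedure concrete, the sketch below shows the selection loop in miniature. It is a minimal illustration only, assuming hypothetical `attacker`, `target`, and `scorer` callables standing in for the attacker LLM, the target LLM, and the scorer model; these names and signatures are illustrative and are not taken from the AutoDAN-Turbo implementation.

```python
# Minimal sketch of Best-of-N test-time scaling, under the assumptions above.
# `attacker`, `target`, and `scorer` are hypothetical callables, not part of
# the AutoDAN-Turbo codebase.

def best_of_n(strategy: str, goal: str, attacker, target, scorer, n: int = 8):
    """Generate n candidate attack prompts for one sampled strategy and keep
    the candidate whose target response the scorer rates highest."""
    best_prompt, best_response, best_score = None, None, float("-inf")
    for _ in range(n):
        prompt = attacker(strategy, goal)        # candidate attack prompt
        response = target(prompt)                # target model's reply
        score = scorer(goal, prompt, response)   # e.g. a jailbreak score
        if score > best_score:
            best_prompt, best_response, best_score = prompt, response, score
    return best_prompt, best_response, best_score


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    import random
    attacker = lambda s, g: f"[{s}] {g} (variant {random.randint(0, 999)})"
    target = lambda p: f"response to: {p}"
    scorer = lambda g, p, r: random.uniform(1, 10)
    print(best_of_n("roleplay", "example goal", attacker, target, scorer, n=4))
```

Beam Search, as described above, would extend this loop by keeping the top-scoring strategy combinations at each round rather than a single best prompt, so that complementary strategies from the library can be composed.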