Jailbreak attacks on large language models (LLMs) induce the models to generate content that breaches ethical and legal norms through the use of malicious prompts, posing a substantial threat to LLM security. Current jailbreak attack and defense strategies often optimize locally within specific algorithmic frameworks, resulting in ineffective optimization and limited scalability. In this paper, we present a systematic analysis of the dependency relationships in jailbreak attack and defense techniques, generalizing them to all possible attack surfaces. We employ directed acyclic graphs (DAGs) to position and analyze existing jailbreak attacks, defenses, and evaluation methodologies, and propose three comprehensive, automated, and logical frameworks. \texttt{AutoAttack} investigates the dependencies within two lines of jailbreak optimization strategies: genetic algorithm (GA)-based attacks and adversarial-generation-based attacks. We then introduce an ensemble jailbreak attack that exploits these dependencies. \texttt{AutoDefense} offers a mixture-of-defenders approach that leverages the dependency relationships among pre-generative and post-generative defense strategies. \texttt{AutoEvaluation} introduces a novel evaluation method that distinguishes hallucinations, which are often overlooked, from jailbreak attack and defense responses. Through extensive experiments, we demonstrate that the proposed ensemble jailbreak attack and defense frameworks significantly outperform existing methods.