Large language models (LLMs) remain vulnerable to multi-turn jailbreaking attacks that exploit conversational context to gradually bypass safety constraints. These attacks target different harm categories through distinct conversational approaches. Existing multi-turn methods often rely on heuristic or ad hoc exploration strategies, offering limited insight into underlying model weaknesses, and the relationship between conversation patterns and model vulnerabilities across harm categories remains poorly understood. We propose Pattern Enhanced Chain of Attack (PE-CoA), a framework of five conversation patterns for constructing multi-turn jailbreaks through natural dialogue. Evaluating PE-CoA on twelve LLMs spanning ten harm categories, we achieve state-of-the-art performance and uncover pattern-specific vulnerabilities and LLM behavioral characteristics: models exhibit distinct weakness profiles, defenses against one pattern do not generalize to others, and models within the same family share similar failure modes. These findings highlight limitations of current safety training and indicate the need for pattern-aware defenses. Code is available at: https://github.com/Ragib-Amin-Nihal/PE-CoA