Multi-turn jailbreak attacks have emerged as a critical threat to Large Language Models (LLMs), bypassing safety mechanisms by progressively constructing adversarial contexts from scratch and incrementally refining prompts. However, existing methods suffer from inefficient incremental context construction, which requires step-by-step interaction with the target LLM, and they often stagnate in suboptimal regions due to surface-level optimization. In this paper, we characterize the Intent-Context Coupling phenomenon, revealing that LLM safety constraints are significantly relaxed when a malicious intent is coupled with a semantically congruent context pattern. Driven by this insight, we propose ICON, an automated multi-turn jailbreak framework that efficiently constructs an authoritative-style context via prior-guided semantic routing. Specifically, ICON first routes the malicious intent to a congruent context pattern (e.g., Scientific Research) and instantiates it as an attack prompt sequence. This sequence progressively builds the authoritative-style context and ultimately elicits prohibited content. In addition, ICON incorporates a Hierarchical Optimization Strategy that combines local prompt refinement with global context switching, preventing the attack from stagnating in ineffective contexts. Experimental results across eight SOTA LLMs demonstrate the effectiveness of ICON, which achieves a state-of-the-art average Attack Success Rate (ASR) of 97.1\%. Code is available at https://github.com/xwlin-roy/ICON.