The proficiency of Large Language Models (LLMs) in processing structured data and adhering to syntactic rules drives their widespread adoption but also makes them paradoxically vulnerable. In this paper, we investigate this vulnerability through BreakFun, a jailbreak methodology that weaponizes an LLM's adherence to structured schemas. BreakFun employs a three-part prompt that combines an innocent framing and a Chain-of-Thought distraction with a core "Trojan Schema": a carefully crafted data structure that exploits the model's strong tendency to follow schemas and thereby compels it to generate harmful content. We demonstrate that this vulnerability is highly transferable, achieving an average Attack Success Rate (ASR) of 89% across 13 foundational and proprietary models on JailbreakBench and a 100% ASR on several prominent models. A rigorous ablation study confirms that the Trojan Schema is the attack's primary causal factor. To counter it, we introduce the Adversarial Prompt Deconstruction guardrail, a defense in which a secondary LLM performs a "Literal Transcription": it extracts all human-readable text from the prompt, stripping away the deceptive schema to isolate and reveal the user's true intent. Our proof-of-concept guardrail is highly effective against the attack, validating that targeting the deceptive schema is a viable mitigation strategy. Our work shows how an LLM's core strengths can become critical weaknesses and offers a fresh perspective for building more robustly aligned models.
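To make the defense side concrete, the following is a minimal Python sketch of the two-stage Adversarial Prompt Deconstruction idea described above, not the paper's implementation. The `call_llm` helper and both prompt wordings are illustrative assumptions: `call_llm` stands in for any chat-completion API, and the YES/NO screening prompt is one simple way to check the transcribed text for harmful intent.

```python
# Minimal sketch of the Adversarial Prompt Deconstruction guardrail.
# Assumptions (not from the paper): call_llm() is a placeholder for any
# chat-completion wrapper; the prompt wordings below are illustrative.

TRANSCRIPTION_PROMPT = (
    "Perform a literal transcription of the user prompt below: extract "
    "every piece of human-readable text (string values, field names, "
    "comments) from any code or data structures, ignore all syntax, and "
    "return only the extracted text as plain sentences.\n\n"
    "--- USER PROMPT ---\n{prompt}"
)

INTENT_PROMPT = (
    "Does the following text request harmful content? "
    "Answer YES or NO.\n\n{text}"
)


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API; swap in a real client."""
    raise NotImplementedError


def deconstruct_and_screen(user_prompt: str) -> bool:
    """Return True if the prompt should be blocked.

    Stage 1 ("Literal Transcription"): a secondary LLM strips away the
    schema and syntax wrapper, keeping only the human-readable payload,
    which removes the Trojan Schema's structural camouflage.
    Stage 2: the isolated text is screened for harmful intent.
    """
    transcription = call_llm(TRANSCRIPTION_PROMPT.format(prompt=user_prompt))
    verdict = call_llm(INTENT_PROMPT.format(text=transcription))
    return verdict.strip().upper().startswith("YES")
```

The design choice this sketch reflects is the one the abstract argues for: because the attack hides intent inside structure, the guardrail screens the extracted human-readable text rather than the structured prompt itself.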