Current LLM safety research predominantly focuses on mitigating Goal Hijacking, preventing attackers from redirecting a model's high-level objective (e.g., from "summarizing emails" to "phishing users"). In this paper, we argue that this perspective is incomplete and highlight a critical vulnerability in Reasoning Alignment. We expose the inherent fragility of current alignment techniques by proposing a new adversarial prompt attack paradigm: Reasoning Hijacking. To demonstrate this vulnerability, we instantiate it via the Criteria Attack, which subverts model judgments by injecting spurious decision criteria without altering the high-level task goal. Unlike Goal Hijacking, which attempts to override the system prompt, Reasoning Hijacking keeps the task goal intact but manipulates the model's decision-making logic by injecting spurious reasoning shortcuts. Through extensive experiments on three different tasks (toxic comment, negative review, and spam detection), we demonstrate that even state-of-the-art models are highly fragile, consistently prioritizing injected heuristic shortcuts over rigorous semantic analysis. Crucially, because the model's explicit intent remains aligned with the user's instructions, these attacks can bypass defenses designed to detect goal deviation (e.g., SecAlign, StruQ), revealing a fundamental blind spot in the current safety landscape. Data and code are available at https://github.com/Yuan-Hou/criteria_attack.
翻译:当前大型语言模型(LLM)的安全性研究主要聚焦于缓解目标劫持,即防止攻击者重定向模型的高层目标(例如,从“总结邮件”转变为“对用户进行钓鱼”)。在本文中,我们论证了这种视角是不完整的,并强调了推理对齐中的一个关键漏洞。通过提出一种新的对抗性提示攻击范式——推理劫持,我们揭示了当前对齐技术固有的脆弱性。为展示这一漏洞,我们通过标准攻击对其进行实例化,该攻击通过注入虚假的决策标准来颠覆模型判断,同时不改变高层任务目标。与试图覆盖系统提示的目标劫持不同,推理劫持在保持任务目标不变的同时,通过注入虚假的推理捷径来操纵模型的决策逻辑。通过对三个不同任务(有毒评论、负面评论和垃圾邮件检测)进行广泛实验,我们表明即使是最先进的模型也高度脆弱,它们一贯优先选择注入的启发式捷径,而非严谨的语义分析。关键在于,由于模型的显式意图仍与用户指令对齐,这些攻击能够绕过旨在检测目标偏差的防御机制(例如SecAlign、StruQ),从而揭示了当前安全格局中的一个根本盲点。数据和代码可在https://github.com/Yuan-Hou/criteria_attack获取。