Large Language Models have shown impressive generative capabilities across diverse tasks, but their safety remains a critical concern. Existing post-training alignment methods, such as SFT and RLHF, reduce harmful outputs yet leave LLMs vulnerable to jailbreak attacks, especially advanced optimization-based ones. Recent system-2 approaches enhance safety by adding inference-time reasoning, where models assess potential risks before producing responses. However, we find these methods fail against powerful out-of-distribution jailbreaks, such as AutoDAN-Turbo and Adversarial Reasoning, which conceal malicious goals behind seemingly benign prompts. We observe that all jailbreaks ultimately aim to embed a core malicious intent, suggesting that extracting this intent is key to defense. To this end, we propose ARMOR, which introduces a structured three-step reasoning pipeline: (1) analyze jailbreak strategies from an external, updatable strategy library, (2) extract the core intent, and (3) apply policy-based safety verification. We further develop ARMOR-Think, which decouples safety reasoning from general reasoning to improve both robustness and utility. Evaluations on advanced optimization-based jailbreaks and safety benchmarks show that ARMOR achieves state-of-the-art safety performance, with an average harmful rate of 0.002 and an attack success rate of 0.06 against advanced optimization-based jailbreaks, far below other reasoning-based models. Moreover, ARMOR demonstrates strong generalization to unseen jailbreak strategies, reducing their success rate to zero. These highlight ARMOR's effectiveness in defending against OOD jailbreak attacks, offering a practical path toward secure and reliable LLMs.
翻译:大语言模型已在多种任务中展现出卓越的生成能力,但其安全性仍是关键问题。现有的训练后对齐方法(如SFT和RLHF)虽能减少有害输出,却使LLM在面对越狱攻击时依然脆弱,尤其是基于优化的高级攻击。近期的系统-2方法通过在推理时增加思考步骤来提升安全性,使模型在生成回复前评估潜在风险。然而,我们发现这些方法难以抵御强大的分布外越狱攻击,例如AutoDAN-Turbo和Adversarial Reasoning,此类攻击将恶意目标隐藏在看似无害的提示背后。我们观察到所有越狱攻击最终都旨在嵌入一个核心恶意意图,这表明提取该意图是防御的关键。为此,我们提出ARMOR,引入了一个结构化的三步推理流程:(1)从外部可更新的策略库中分析越狱策略,(2)提取核心意图,(3)应用基于策略的安全性验证。我们进一步开发了ARMOR-Think,将安全性推理与通用推理解耦,以同时提升鲁棒性和实用性。在基于优化的高级越狱攻击和安全性基准测试上的评估表明,ARMOR实现了最先进的安全性能,其平均有害率为0.002,在面对高级优化型越狱攻击时的攻击成功率为0.06,远低于其他基于推理的模型。此外,ARMOR对未见过的越狱策略展现出强大的泛化能力,将其成功率降至零。这些结果凸显了ARMOR在防御OOD越狱攻击方面的有效性,为构建安全可靠的大语言模型提供了一条实用路径。