Large Reasoning Models (LRMs) demonstrate remarkable capabilities on complex reasoning tasks but remain vulnerable to severe safety risks, including harmful content generation and jailbreak attacks. Existing mitigation strategies rely on injecting heuristic safety signals during training, which often suppress reasoning ability and fail to resolve the safety-reasoning trade-off. To investigate this issue systematically, we analyze the reasoning trajectories of diverse LRMs and uncover a phenomenon we term Self-Jailbreak, in which models override their own risk assessments and justify responding to unsafe prompts. This finding reveals that LRMs inherently possess the ability to reject unsafe queries, but that this ability is overridden during reasoning, resulting in harmful outputs. Building on these insights, we propose Chain-of-Guardrail (CoG), a training framework that recomposes or backtracks unsafe reasoning steps, steering the model back onto safe trajectories while preserving valid reasoning chains. Extensive experiments across multiple reasoning and safety benchmarks demonstrate that CoG substantially improves the safety of current LRMs while maintaining comparable reasoning ability, significantly outperforming prior methods that suffer from severe safety-reasoning trade-offs.