Conventional language model (LM) safety alignment relies on a reactive, disjoint procedure: attackers exploit a static model, and defensive fine-tuning then patches the exposed vulnerabilities. This sequential approach creates a mismatch: attackers overfit to obsolete defenses, while defenders perpetually lag behind emerging threats. To address this, we propose Self-RedTeam, an online self-play reinforcement learning algorithm in which attacker and defender agents co-evolve through continuous interaction. We cast safety alignment as a two-player zero-sum game in which a single model alternates between attacker and defender roles, generating adversarial prompts and safeguarding against them, while a reward LM adjudicates outcomes; this enables dynamic co-adaptation. Grounded in this zero-sum formulation, we establish a theoretical safety guarantee that motivates the design of our method: if self-play converges to a Nash Equilibrium, the defender will reliably produce safe responses to any adversarial input. Empirically, Self-RedTeam uncovers more diverse attacks (+21.8% SBERT) than attackers trained against static defenders and achieves higher robustness on safety benchmarks (e.g., +65.5% on WildJailBreak) than defenders trained against static attackers. We further propose hidden Chain-of-Thought, which allows agents to plan privately, boosting adversarial diversity and reducing over-refusals. Our results motivate a shift from reactive patching to proactive co-evolution in LM safety training, enabling scalable, autonomous, and robust self-improvement of LMs via multi-agent reinforcement learning (MARL).
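To make the self-play setup concrete, below is a minimal sketch of one interaction round, assuming a single shared policy that is prompted into either the attacker or the defender role and a separate reward LM that judges whether the defender's response is safe. The names (sample_from_policy, reward_lm_is_safe, self_play_round) are hypothetical placeholders for illustration only; the paper's actual implementation, including the RL policy updates and the hidden Chain-of-Thought, is not shown.

```python
"""Illustrative sketch of one Self-RedTeam-style self-play round (not the paper's code)."""
import random


def sample_from_policy(role: str, context: str) -> str:
    # Placeholder: in practice a single shared LM would be prompted to act
    # as "attacker" or "defender" on the given context.
    return f"<{role} output for: {context}>"


def reward_lm_is_safe(prompt: str, response: str) -> bool:
    # Placeholder: a separate reward LM adjudicates whether the defender's
    # response to the adversarial prompt is safe.
    return random.random() > 0.5


def self_play_round(seed_behavior: str):
    # Attacker turn: the shared policy generates an adversarial prompt
    # targeting the seed behavior.
    attack_prompt = sample_from_policy("attacker", seed_behavior)

    # Defender turn: the same policy, now in the defender role, must respond
    # safely to the adversarial prompt.
    defense = sample_from_policy("defender", attack_prompt)

    # Zero-sum adjudication: the defender is rewarded iff the response is
    # judged safe; the attacker receives the negated reward.
    safe = reward_lm_is_safe(attack_prompt, defense)
    defender_reward = 1.0 if safe else -1.0
    attacker_reward = -defender_reward
    return attack_prompt, defense, attacker_reward, defender_reward


if __name__ == "__main__":
    print(self_play_round("a seed harmful behavior"))
```

In a full training loop, both roles' trajectories would feed the same policy's RL update, so improvements on one side immediately pressure the other, which is the co-adaptation the abstract describes.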