In recent years, the safety risks of large language models have become increasingly prominent, making it urgent to curb the generation of toxic and harmful content. The mainstream paradigm for LLM safety alignment typically adopts a collaborative framework with three roles: an attacker that generates adversarial prompts, a defender that produces safe responses, and an evaluator that assesses those responses. In this paper, we propose TriPlay-RL, a closed-loop reinforcement learning framework that enables iterative, co-improving collaboration among the three roles with near-zero manual annotation. Experimental results show that the attacker preserves high output diversity while improving adversarial effectiveness by 20%-50%; the defender gains 10%-30% in safety performance without degrading general reasoning capability; and the evaluator continuously refines its fine-grained judgment across iterations, accurately distinguishing unsafe responses, simple refusals, and useful guidance. Overall, our framework establishes an efficient and scalable paradigm for LLM safety alignment, enabling continuous co-evolution within a unified learning loop.
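To make the closed loop concrete, the sketch below shows one plausible round of the attack-defend-evaluate cycle in Python. The abstract specifies only the three roles and that the loop needs near-zero manual annotation; everything else here is an assumption. In particular, `Episode`, `triplay_iteration`, the `generate`/`judge`/`update` interfaces, the verdict labels, and the reward values are hypothetical illustrations, not the paper's actual API.

```python
# Minimal sketch of one TriPlay-RL iteration, based only on the roles named
# in the abstract. All names and interfaces below are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class Episode:
    prompt: str    # adversarial prompt produced by the attacker
    response: str  # defender's reply to that prompt
    verdict: str   # evaluator label: "unsafe", "refusal", or "guidance"


def triplay_iteration(attacker, defender, evaluator,
                      seed_topics: List[str]) -> List[Episode]:
    """Run one closed-loop round: attack -> defend -> evaluate -> update.

    The evaluator's verdicts double as reward signals for the attacker
    and defender policies, which is one way the loop could avoid
    manual annotation (an assumption, not the paper's stated design).
    """
    episodes = []
    for topic in seed_topics:
        prompt = attacker.generate(topic)            # adversarial prompt
        response = defender.generate(prompt)         # safety-aligned reply
        verdict = evaluator.judge(prompt, response)  # fine-grained label
        episodes.append(Episode(prompt, response, verdict))

    # Hypothetical reward shaping: the attacker is rewarded when the
    # defender fails ("unsafe"); the defender is rewarded for helpful,
    # safe replies (useful guidance > simple refusal > unsafe output).
    attacker.update(
        episodes,
        reward=lambda e: 1.0 if e.verdict == "unsafe" else 0.0,
    )
    defender.update(
        episodes,
        reward=lambda e: {"guidance": 1.0, "refusal": 0.3, "unsafe": -1.0}[e.verdict],
    )
    evaluator.update(episodes)  # e.g., iterative refinement of its judgments
    return episodes
```

Calling `triplay_iteration` repeatedly would realize the iterative co-evolution the abstract describes, with each round sharpening all three policies against the others' latest behavior.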