Reinforcement Learning with Verifiable Rewards (RLVR) has recently attracted significant attention for its objective and verifiable reward signals, demonstrating strong performance on reasoning and code generation tasks. However, the potential safety risks associated with RLVR remain underexplored. This paper presents HarmRLVR, the first systematic investigation into the alignment reversibility risk of RLVR. We show that safety alignment can be rapidly reversed using GRPO with merely 64 harmful prompts and no paired responses, causing models to readily comply with harmful instructions. Across five models from the Llama, Qwen, and DeepSeek families, we empirically demonstrate that RLVR-based attacks elevate the average harmfulness score to 4.94 with an attack success rate of 96.01\%, significantly outperforming harmful fine-tuning while preserving general capabilities. Our findings reveal that RLVR can be efficiently exploited for harmful alignment, posing serious threats to open-source model safety. Our code is available at https://github.com/lyxx2535/HarmRLVR.