AI-text detectors face a critical robustness challenge: adversarial paraphrasing attacks that preserve semantics while evading detection. We introduce StealthRL, a reinforcement learning framework that stress-tests detector robustness under realistic adversarial conditions. StealthRL trains a paraphrase policy against a multi-detector ensemble using Group Relative Policy Optimization (GRPO) with LoRA adapters on Qwen3-4B, optimizing a composite reward that balances detector evasion with semantic preservation. We evaluate six attack settings (M0-M5) against three detector families (RoBERTa, FastDetectGPT, and Binoculars) at the security-relevant 1% false positive rate operating point. StealthRL achieves near-zero detection (0.001 mean TPR@1%FPR), reduces mean AUROC from 0.74 to 0.27, and attains a 99.9% attack success rate. Critically, attacks transfer to a held-out detector family not seen during training, revealing shared architectural vulnerabilities rather than detector-specific brittleness. We additionally conduct LLM-based quality evaluation via Likert scoring, analyze detector score distributions to explain why evasion succeeds, and provide per-detector AUROC with bootstrap confidence intervals. Our results expose significant robustness gaps in current AI-text detection and establish StealthRL as a principled adversarial evaluation protocol. Code and evaluation pipeline are publicly available at https://github.com/suraj-ranganath/StealthRL.
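The evaluation metrics named above (TPR@1%FPR, AUROC, and bootstrap confidence intervals) can be computed from raw detector scores in a few lines. The sketch below is illustrative and not the paper's released pipeline; it assumes the convention that a higher detector score means "more likely AI-generated", and the function names are our own:

```python
import numpy as np

def tpr_at_fpr(human_scores, ai_scores, fpr=0.01):
    """Detection rate on AI text at the threshold that yields `fpr` on human text.

    Assumes higher score = more likely AI. The threshold is the (1 - fpr)
    quantile of human scores, so ~1% of human texts are falsely flagged.
    """
    thresh = np.quantile(human_scores, 1.0 - fpr)
    return float(np.mean(ai_scores > thresh))

def auroc(human_scores, ai_scores):
    """AUROC via the rank-sum (Mann-Whitney U) formula, ignoring ties:
    the probability a random AI text outscores a random human text."""
    scores = np.concatenate([human_scores, ai_scores])
    ranks = scores.argsort().argsort() + 1  # 1-based ranks
    n_pos, n_neg = len(ai_scores), len(human_scores)
    pos_rank_sum = ranks[n_neg:].sum()
    return float((pos_rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

def bootstrap_ci(human_scores, ai_scores, metric, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a metric of (human_scores, ai_scores)."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        h = rng.choice(human_scores, size=len(human_scores), replace=True)
        a = rng.choice(ai_scores, size=len(ai_scores), replace=True)
        stats.append(metric(h, a))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```

Evaluating at a fixed low FPR rather than via accuracy or AUROC alone reflects the security-relevant operating point: a deployed detector must rarely accuse human authors, so an attack that drives TPR@1%FPR toward zero defeats it in practice even if some score separation remains.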