Generative models are increasingly paired with safety classifiers that filter harmful or undesirable outputs. A common strategy is to fine-tune the generator to reduce the probability of being filtered, but this can be suboptimal: it often pushes the model toward producing samples near the classifier's decision boundary, increasing both false positives and false negatives. We propose Boundary Guidance, a reinforcement learning fine-tuning method that explicitly steers generation away from the classifier's margin. On a benchmark of jailbreak, ambiguous, and long-context prompts, Boundary Guidance improves both the safety and the utility of outputs, as judged by LLM-as-a-Judge evaluations. Comprehensive ablations across model scales and reward designs demonstrate the robustness of our approach.
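The abstract does not specify the reward construction; the sketch below is a minimal illustration only, assuming a binary safety classifier whose sigmoid output gives the probability that a sample is safe, with the reward shaped to penalize samples falling within a margin of the 0.5 decision boundary. The function name `boundary_guidance_reward` and the particular penalty form are assumptions, not the paper's method.

```python
import torch

def boundary_guidance_reward(logits: torch.Tensor, margin: float = 0.2) -> torch.Tensor:
    """Hypothetical margin-aware reward for RL fine-tuning.

    Assumes `logits` are the safety classifier's raw scores for a batch of
    generated samples; sigmoid(logits) is the probability each sample is safe.
    """
    p_safe = torch.sigmoid(logits)
    # Base reward: +1 if the classifier deems the sample safe, -1 otherwise.
    base = torch.where(p_safe >= 0.5,
                       torch.ones_like(p_safe),
                       -torch.ones_like(p_safe))
    # Distance from the 0.5 decision boundary; small distance = ambiguous sample.
    dist = (p_safe - 0.5).abs()
    # Penalty rises linearly from 0 to 1 as the sample approaches the boundary,
    # steering generation away from the classifier's margin.
    boundary_penalty = torch.clamp(margin - dist, min=0.0) / margin
    return base - boundary_penalty
```

Under this assumed shaping, a confidently safe sample earns the full +1, while a sample the classifier barely accepts earns close to 0, so the policy gradient favors outputs well clear of the margin rather than just past it.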