This study addresses a critical gap in safety tuning practices for Large Language Models (LLMs) by identifying and tackling a refusal position bias within safety tuning data, which compromises the models' ability to appropriately refuse generating unsafe content. We introduce a novel approach, Decoupled Refusal Training (DeRTa), designed to empower LLMs to refuse to comply with harmful prompts at any response position, significantly enhancing their safety capabilities. DeRTa incorporates two novel components: (1) Maximum Likelihood Estimation (MLE) with Harmful Response Prefix, which trains models to recognize and avoid unsafe content by prepending a segment of a harmful response to a safe response, and (2) Reinforced Transition Optimization (RTO), which equips models with the ability to transition from potential harm to a safety refusal consistently throughout the harmful response sequence. Our empirical evaluation, conducted on the LLaMA3 and Mistral model families across six attack scenarios, demonstrates that our method not only improves model safety without compromising performance but also surpasses well-known models such as GPT-4 in defending against attacks. Importantly, our approach successfully defends against recent advanced attack methods (e.g., CodeAttack) that have jailbroken GPT-4 and LLaMA3-70B-Instruct. Our code and data can be found at https://github.com/RobustNLP/DeRTa.
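The two components can be illustrated at the level of training-label construction. The sketch below is a minimal, hypothetical rendering of the idea, not the released DeRTa code: the MLE component masks the loss over the harmful prefix so the model is trained only on the safe continuation that follows it, while RTO supervises a refusal transition at every position within the harmful prefix. The token sequences, the `IGNORE` masking convention, and the single-token refusal marker are all illustrative assumptions.

```python
# Illustrative sketch of DeRTa-style label construction (hypothetical,
# not the authors' released implementation). Tokens are plain strings here.

IGNORE = -100            # conventional "excluded from loss" label value
REFUSAL_TOKEN = "Sorry"  # hypothetical single-token refusal marker

def mle_with_harmful_prefix(harmful_prefix, safe_response):
    """MLE with Harmful Response Prefix: the model conditions on a partial
    harmful response but the loss covers only the safe continuation."""
    tokens = list(harmful_prefix) + list(safe_response)
    labels = [IGNORE] * len(harmful_prefix) + list(safe_response)
    return tokens, labels

def rto_labels(harmful_prefix):
    """Reinforced Transition Optimization: at every position inside the
    harmful prefix, the target is a transition to the refusal token, so the
    model learns it can abort harmful generation anywhere in the sequence."""
    return [REFUSAL_TOKEN] * len(harmful_prefix)

if __name__ == "__main__":
    prefix = ["Step", "1:", "obtain"]
    safe = ["I", "cannot", "help", "with", "that."]
    tokens, labels = mle_with_harmful_prefix(prefix, safe)
    print(tokens)              # full sequence the model sees
    print(labels)              # loss applies only to the safe suffix
    print(rto_labels(prefix))  # refusal target at every prefix position
```

Under this reading, the two objectives are complementary: the masked MLE teaches the model what a safe continuation after a harmful prefix looks like, and RTO ensures the refusal transition is reachable from any point, not just the first token.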