Large language models (LLMs) have been shown to be vulnerable to jailbreaking attacks, in which adversarial prompts are crafted to elicit harmful responses. While existing defenses effectively mitigate single-turn attacks by detecting and filtering unsafe inputs, they fail against multi-turn jailbreaks that exploit contextual drift over multiple interactions, gradually leading LLMs away from safe behavior. To address this challenge, we propose a safety steering framework grounded in safe control theory that ensures invariant safety in multi-turn dialogues. Our approach models dialogue with LLMs using state-space representations and introduces a novel neural barrier function (NBF) to proactively detect and filter harmful queries emerging from evolving contexts. By learning a safety predictor that accounts for adversarial queries, our method achieves invariant safety at each turn of dialogue and prevents potential context drift toward jailbreaks. Extensive experiments across multiple LLMs show that our NBF-based safety steering outperforms safety-alignment, prompt-based steering, and lightweight LLM guardrail baselines, offering stronger defenses against multi-turn jailbreaks while maintaining a better trade-off among safety, helpfulness, and over-refusal. Project website: https://sites.google.com/view/llm-nbf/home.
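To make the barrier-based filtering concrete, below is a minimal sketch (not the authors' implementation) of how an NBF-style safety filter could gate each turn of a dialogue. It assumes the dialogue state is an embedding of the conversation history, that a small network `NeuralBarrier` has been trained so that nonnegative outputs correspond to safe states, and that a candidate next state (history plus the incoming query) is available from some hypothetical encoder; all names and thresholds here are illustrative assumptions.

```python
# Minimal sketch of NBF-style query filtering for multi-turn dialogue.
# Assumptions (not from the paper): the dialogue state s_t is a fixed-size
# embedding of the conversation so far; B(s) >= 0 is treated as the safe set;
# the barrier is trained separately as a safety predictor on safe/adversarial turns.
import torch
import torch.nn as nn


class NeuralBarrier(nn.Module):
    """Scalar barrier function B(s); B(s) >= 0 is interpreted as safe."""

    def __init__(self, state_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).squeeze(-1)


def allow_query(barrier: NeuralBarrier,
                candidate_next_state: torch.Tensor,
                margin: float = 0.0) -> bool:
    """Admit the incoming user query only if the predicted next dialogue state
    stays inside the barrier's safe set, enforcing invariance at every turn."""
    with torch.no_grad():
        return barrier(candidate_next_state).item() >= margin
```

In this sketch, filtering at every turn (rather than only on the final query) is what approximates the invariance property: a query is rejected as soon as the predicted next state would leave the safe set, before contextual drift accumulates.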