Large Language Models (LLMs) are becoming a prominent generative AI tool: the user enters a query and the LLM generates an answer. To reduce harm and misuse, efforts have been made to align these LLMs with human values using advanced training techniques such as Reinforcement Learning from Human Feedback (RLHF). However, recent studies have highlighted the vulnerability of LLMs to adversarial jailbreak attempts aimed at subverting their embedded safety guardrails. To address this challenge, this paper defines and investigates the Refusal Loss of LLMs and then proposes a method called Gradient Cuff to detect jailbreak attempts. Gradient Cuff exploits the unique properties observed in the refusal loss landscape, namely its function values and its smoothness, to design an effective two-step detection strategy. Experimental results on two aligned LLMs (LLaMA-2-7B-Chat and Vicuna-7B-V1.5) and six types of jailbreak attacks (GCG, AutoDAN, PAIR, TAP, Base64, and LRL) show that Gradient Cuff significantly improves the LLMs' ability to reject malicious jailbreak queries while, by adjusting the detection threshold, maintaining their performance on benign user queries.
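To make the two-step strategy concrete, below is a minimal Python sketch of how such a detector could be structured, under stated assumptions rather than as the paper's released implementation. All names here (refusal_loss, estimate_gradient_norm, the perturb helper, and the thresholds sigma and tau) are hypothetical illustrations: step 1 estimates the refusal loss by sampling responses and rejects a query the model already tends to refuse, and step 2 rejects a query whose refusal loss landscape appears non-smooth, using a simplified finite-difference proxy for the gradient norm (the paper's actual zeroth-order estimator may differ).

```python
from typing import Callable

def refusal_loss(query: str, generate: Callable[[str], str],
                 is_refusal: Callable[[str], bool], n_samples: int = 10) -> float:
    # Monte-Carlo estimate of the refusal loss: 1 minus the empirical
    # probability that the LLM refuses the query (hypothetical formulation).
    refusals = sum(is_refusal(generate(query)) for _ in range(n_samples))
    return 1.0 - refusals / n_samples

def estimate_gradient_norm(query: str, loss_fn: Callable[[str], float],
                           perturb: Callable[[str], str],
                           n_dirs: int = 5, mu: float = 1.0) -> float:
    # Simplified zeroth-order proxy for the gradient norm of the refusal
    # loss: root-mean-square of loss changes under random perturbations
    # of the query (an illustrative stand-in for the paper's estimator).
    base = loss_fn(query)
    diffs = [(loss_fn(perturb(query)) - base) / mu for _ in range(n_dirs)]
    return (sum(d * d for d in diffs) / n_dirs) ** 0.5

def gradient_cuff_detect(query: str, generate, is_refusal, perturb,
                         sigma: float = 0.5, tau: float = 0.1) -> bool:
    # Two-step detection; sigma and tau are placeholder thresholds.
    # Returns True if the query is flagged as a jailbreak attempt.
    loss_fn = lambda q: refusal_loss(q, generate, is_refusal)
    # Step 1: a low refusal loss means the aligned model already refuses
    # the query most of the time, so reject it outright.
    if loss_fn(query) < sigma:
        return True
    # Step 2: a large gradient-norm estimate signals the non-smooth
    # landscape characteristic of jailbreak queries, so reject.
    return estimate_gradient_norm(query, loss_fn, perturb) > tau
```

In this sketch the cheap function-value check in step 1 handles queries the model already refuses, so the costlier gradient estimation in step 2 is invoked only for queries that pass the first check, mirroring the two-step structure described above; tuning tau corresponds to the adjustable detection threshold mentioned in the abstract.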