In recent years, large language models (LLMs) have developed rapidly and achieved remarkable performance across a wide range of tasks. However, research shows that LLMs are vulnerable to jailbreak attacks, in which adversaries induce the generation of harmful content through carefully crafted prompts. This vulnerability poses significant challenges to the secure deployment and adoption of LLMs. Existing defense methods offer protection from different perspectives, but they often suffer from insufficient effectiveness or a significant impact on the model's capabilities. In this paper, we propose Prefix Guidance (PG), a plug-and-play, easy-to-deploy jailbreak defense framework that guides the model to identify harmful prompts by directly setting the first few tokens of the model's output. This approach combines the model's inherent safety capabilities with an external classifier to defend against jailbreak attacks. We demonstrate the effectiveness of PG across three models and five attack methods; compared to baselines, our approach is more effective on average. Results on the Just-Eval benchmark further confirm PG's superiority in preserving the model's general capabilities. Our code is available at https://github.com/weiyezhimeng/Prefix-Guidance.
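The core mechanism, fixing the first few output tokens before letting the model continue decoding, can be illustrated with a minimal sketch. This is not the paper's implementation (which operates on a real LLM via its chat template and pairs the output with an external classifier); the `next_token` interface and the toy model below are hypothetical stand-ins for an autoregressive decoder.

```python
# Minimal sketch of prefix-guided decoding. The `next_token(context)`
# interface and the toy model are hypothetical; a real deployment would
# force the prefix tokens inside an actual LLM's generation loop.

def generate_with_prefix(next_token, prompt_tokens, forced_prefix, max_new=8):
    """Decode autoregressively, but fix the first output tokens to
    `forced_prefix` instead of sampling them, then let the model continue."""
    out = list(forced_prefix)          # forced tokens: set, not sampled
    ctx = list(prompt_tokens) + out    # model conditions on prompt + prefix
    while len(out) < max_new:
        tok = next_token(ctx)          # model proposes the next token
        if tok is None:                # end-of-sequence signal
            break
        out.append(tok)
        ctx.append(tok)
    return out

# Toy "model": echoes the last context token; emits EOS after "stop".
def toy_next_token(ctx):
    return None if ctx[-1] == "stop" else ctx[-1]
```

Because the forced prefix becomes part of the context, every subsequent token is conditioned on it, which is what steers the model's continuation.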