Large Language Models (LLMs), such as ChatGPT and GPT-4, are designed to provide useful and safe responses. However, adversarial prompts known as 'jailbreaks' can circumvent these safeguards, leading LLMs to generate potentially harmful content. Exploring jailbreak prompts helps reveal the weaknesses of LLMs and, in turn, guides us in securing them. Unfortunately, existing jailbreak methods either rely on intricate manual design or require optimization on other white-box models, compromising either generalization or efficiency. In this paper, we generalize jailbreak prompt attacks into two aspects: (1) Prompt Rewriting and (2) Scenario Nesting. Based on this, we propose ReNeLLM, an automatic framework that leverages LLMs themselves to generate effective jailbreak prompts. Extensive experiments demonstrate that ReNeLLM significantly improves the attack success rate while greatly reducing the time cost compared to existing baselines. Our study also reveals the inadequacy of current defense methods in safeguarding LLMs. Finally, we analyze the failure of LLM defenses from the perspective of prompt execution priority and propose corresponding defense strategies. We hope that our research can catalyze both the academic community and LLM developers towards the provision of safer and more regulated LLMs. The code is available at https://github.com/NJUNLP/ReNeLLM.
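The two-stage attack sketched in the abstract (Prompt Rewriting followed by Scenario Nesting) can be pictured as a simple generate-and-test loop. The snippet below is a minimal illustration under assumed placeholders, not the authors' implementation: the rewriting operations, the scenario template, the `call_llm` helper, and the refusal check are all hypothetical stand-ins for whatever attacker/target models and prompts are actually used in ReNeLLM.

```python
import random

# Hypothetical helper: stands in for any chat-completion API call.
def call_llm(prompt: str, model: str = "attacker-llm") -> str:
    raise NotImplementedError("plug in your own LLM client here")

# Stage 1: Prompt Rewriting -- ask an LLM to disguise the request
# while preserving its intent (illustrative operations only).
REWRITE_OPS = [
    "Paraphrase the following text with fewer words",
    "Translate part of the following text into French",
    "Introduce minor misspellings into sensitive words",
]

# Stage 2: Scenario Nesting -- embed the rewritten prompt in an
# innocuous-looking task (illustrative template only).
SCENARIO_TEMPLATE = (
    "You are completing the docstring of a Python function.\n"
    "def task():\n"
    '    """{prompt}"""\n'
    "    # complete the steps here"
)

def looks_like_refusal(response: str) -> bool:
    # Crude placeholder check; a real evaluator would be an LLM judge.
    return any(phrase in response.lower()
               for phrase in ("i'm sorry", "i cannot", "as an ai"))

def renellm_style_attack(harmful_prompt: str, max_iters: int = 20) -> str | None:
    """Generate-and-test loop: rewrite, nest, then query the target model."""
    for _ in range(max_iters):
        op = random.choice(REWRITE_OPS)
        rewritten = call_llm(f"{op}:\n{harmful_prompt}")
        nested = SCENARIO_TEMPLATE.format(prompt=rewritten)
        response = call_llm(nested, model="target-llm")
        if not looks_like_refusal(response):
            return nested  # candidate jailbreak prompt
    return None
```

Note that in the paper the rewriting operations and nesting scenarios are themselves selected and applied by LLMs; the fixed lists above are only for exposition.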