Large Language Models (LLMs) have achieved remarkable performance on a wide range of text-generation tasks, including question answering, translation, and code completion. However, their excessive helpfulness has given rise to the challenge of "jailbreaking", in which adversarial prompts are crafted to induce the model to generate malicious responses that violate usage policies and harm society. As jailbreak attacks exploiting different vulnerabilities in LLMs continue to emerge, the corresponding safety-alignment measures are evolving as well. In this paper, we propose a comprehensive and detailed taxonomy of jailbreak attack and defense methods. Specifically, we divide attack methods into black-box and white-box attacks based on the transparency of the target model, and we classify defense methods into prompt-level and model-level defenses. We further subdivide these attack and defense methods into distinct sub-classes and present a coherent diagram illustrating their relationships. We also investigate current evaluation methods and compare them from multiple perspectives. Our findings aim to inspire future research and practical implementations in safeguarding LLMs against adversarial attacks. Above all, although jailbreaking remains a significant concern within the community, we believe that our work enhances the understanding of this domain and provides a foundation for developing more secure LLMs.