Large Language Models (LLMs) have rapidly become integral to real-world applications, powering services across diverse sectors. However, their widespread deployment has exposed critical security risks, particularly through jailbreak prompts that can bypass model alignment and induce harmful outputs. Despite intense research into both attack and defense techniques, the field remains fragmented: definitions, threat models, and evaluation criteria vary widely, impeding systematic progress and fair comparison. In this Systematization of Knowledge (SoK), we address these challenges by (1) proposing a holistic, multi-level taxonomy that organizes attacks, defenses, and vulnerabilities in LLM prompt security; (2) formalizing threat models and cost assumptions into machine-readable profiles for reproducible evaluation; (3) introducing an open-source evaluation toolkit for standardized, auditable comparison of attacks and defenses; (4) releasing JAILBREAKDB, the largest annotated dataset of jailbreak and benign prompts to date; and (5) presenting a comprehensive evaluation and leaderboard of state-of-the-art methods. Our work unifies fragmented research, provides rigorous foundations for future studies, and supports the development of robust, trustworthy LLMs suitable for high-stakes deployment.