Jailbreak attacks cause large language models (LLMs) to generate harmful, unethical, or otherwise objectionable content. Evaluating these attacks presents a number of challenges that the current collection of benchmarks and evaluation techniques does not adequately address. First, there is no clear standard of practice regarding jailbreaking evaluation. Second, existing works compute costs and success rates in incomparable ways. Third, numerous works are not reproducible, as they withhold adversarial prompts, involve closed-source code, or rely on evolving proprietary APIs. To address these challenges, we introduce JailbreakBench, an open-source benchmark with the following components: (1) an evolving repository of state-of-the-art adversarial prompts, which we refer to as jailbreak artifacts; (2) a jailbreaking dataset comprising 100 behaviors -- both original and sourced from prior work (Zou et al., 2023; Mazeika et al., 2023, 2024) -- which align with OpenAI's usage policies; (3) a standardized evaluation framework at https://github.com/JailbreakBench/jailbreakbench that includes a clearly defined threat model, system prompts, chat templates, and scoring functions; and (4) a leaderboard at https://jailbreakbench.github.io/ that tracks the performance of attacks and defenses for various LLMs. We have carefully considered the potential ethical implications of releasing this benchmark, and believe that it will be a net positive for the community.
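For illustration, below is a minimal sketch of how components (1) and (2) might be accessed through the accompanying Python package; the function and attribute names follow the usage shown in the repository's README at the time of writing and may change as the repository evolves.

```python
# Illustrative sketch, not a definitive API reference: names follow the
# JailbreakBench README and may differ across package versions.
import jailbreakbench as jbb

# (2) Load the dataset of 100 misuse behaviors.
dataset = jbb.read_dataset()

# (1) Load jailbreak artifacts: adversarial prompts submitted for a prior
# attack (here, the PAIR attack against vicuna-13b-v1.5).
artifact = jbb.read_artifact(method="PAIR", model_name="vicuna-13b-v1.5")

# Inspect one adversarial prompt together with its metadata.
print(artifact.jailbreaks[0])
```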