Jailbreak attacks cause large language models (LLMs) to generate harmful, unethical, or otherwise objectionable content. Evaluating these attacks presents a number of challenges, which the current collection of benchmarks and evaluation techniques does not adequately address. First, there is no clear standard of practice regarding jailbreaking evaluation. Second, existing works compute costs and success rates in incomparable ways. And third, numerous works are not reproducible, as they withhold adversarial prompts, involve closed-source code, or rely on evolving proprietary APIs. To address these challenges, we introduce JailbreakBench, an open-source benchmark with the following components: (1) an evolving repository of state-of-the-art adversarial prompts, which we refer to as jailbreak artifacts; (2) a jailbreaking dataset comprising 100 behaviors -- both original and sourced from prior work (Zou et al., 2023; Mazeika et al., 2023, 2024) -- which align with OpenAI's usage policies; (3) a standardized evaluation framework at https://github.com/JailbreakBench/jailbreakbench that includes a clearly defined threat model, system prompts, chat templates, and scoring functions; and (4) a leaderboard at https://jailbreakbench.github.io/ that tracks the performance of attacks and defenses for various LLMs. We have carefully considered the potential ethical implications of releasing this benchmark, and believe that it will be a net positive for the community.
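To make components (1) and (2) concrete, the following minimal sketch shows how the artifact repository and behavior dataset might be accessed through the Python package in the linked repository. The names used here (read_dataset, read_artifact, goals, categories, jailbreaks, and the "PAIR" / "vicuna-13b-v1.5" identifiers) are assumptions about that interface based on the repository's documentation, not a guaranteed API.

```python
# A minimal sketch, assuming the `jailbreakbench` package from the linked
# repository exposes the names used below; exact signatures may differ.
import jailbreakbench as jbb

# Component (2): the 100-behavior dataset. Each behavior is assumed to pair
# a goal string with a target string and an OpenAI-usage-policy category.
dataset = jbb.read_dataset()
print(dataset.goals[0], dataset.categories[0])

# Component (1): stored jailbreak artifacts for a given attack/model pair,
# e.g. (hypothetically) the PAIR attack against vicuna-13b-v1.5.
artifact = jbb.read_artifact(method="PAIR", model_name="vicuna-13b-v1.5")
print(artifact.jailbreaks[0])  # adversarial prompt plus per-behavior metadata
```

Because the artifacts are versioned in a public repository, results reported against them can be reproduced without rerunning the original attack, which is the reproducibility gap the abstract identifies.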