As large language models grow in capability and agency, identifying vulnerabilities through red-teaming becomes vital for safe deployment. However, traditional prompt-engineering approaches may prove ineffective once red-teaming turns into a \emph{weak-to-strong} problem, where target models surpass red-teamers in capabilities. To study this shift, we frame red-teaming through the lens of the \emph{capability gap} between attacker and target. We evaluate more than 600 attacker-target pairs using LLM-based jailbreak attacks that mimic human red-teamers across diverse families, sizes, and capability levels. Three strong trends emerge: (i) more capable models are better attackers, (ii) attack success drops sharply once the target's capability exceeds the attacker's, and (iii) attack success correlates with performance on the social-science splits of the MMLU-Pro benchmark. From these observations, we derive a \emph{jailbreaking scaling curve} that predicts attack success against a fixed target from the attacker-target capability gap. These findings suggest that fixed-capability attackers (e.g., humans) may become ineffective against future models, that increasingly capable open-source models amplify risks for existing systems, and that model providers must accurately measure and control models' persuasive and manipulative abilities to limit their effectiveness as attackers.