The rapid integration of Multimodal Large Language Models (MLLMs) into critical applications is increasingly hindered by persistent safety vulnerabilities. However, existing red-teaming benchmarks are often fragmented, limited to single-turn text interactions, and lack the scalability required for systematic evaluation. To address this, we introduce OpenRT, a unified, modular, and high-throughput red-teaming framework designed for comprehensive MLLM safety evaluation. At its core, OpenRT architects a paradigm shift in automated red-teaming by introducing an adversarial kernel that enables modular separation across five critical dimensions: model integration, dataset management, attack strategies, judging methods, and evaluation metrics. By standardizing attack interfaces, it decouples adversarial logic from a high-throughput asynchronous runtime, enabling systematic scaling across diverse models. Our framework integrates 37 diverse attack methodologies, spanning white-box gradients, multi-modal perturbations, and sophisticated multi-agent evolutionary strategies. Through an extensive empirical study on 20 advanced models (including GPT-5.2, Claude 4.5, and Gemini 3 Pro), we expose critical safety gaps: even frontier models fail to generalize across attack paradigms, with leading models exhibiting average Attack Success Rates as high as 49.14%. Notably, our findings reveal that reasoning models do not inherently possess superior robustness against complex, multi-turn jailbreaks. By open-sourcing OpenRT, we provide a sustainable, extensible, and continuously maintained infrastructure that accelerates the development and standardization of AI safety.
翻译:多模态大语言模型在关键应用中的快速集成,正日益受到其持续存在的安全漏洞的阻碍。然而,现有的红队测试基准往往零散、仅限于单轮文本交互,并且缺乏系统评估所需的可扩展性。为解决此问题,我们引入了OpenRT,这是一个为全面评估MLLM安全性而设计的统一、模块化、高吞吐量的红队测试框架。其核心在于,OpenRT通过引入一个对抗性内核,在自动化红队测试领域构建了一种范式转变。该内核实现了五个关键维度的模块化分离:模型集成、数据集管理、攻击策略、判断方法和评估指标。通过标准化攻击接口,它将对抗逻辑与一个高吞吐量的异步运行时解耦,从而能够在不同模型间进行系统性扩展。我们的框架集成了37种不同的攻击方法,涵盖白盒梯度、多模态扰动以及复杂的多智能体进化策略。通过对20个先进模型(包括GPT-5.2、Claude 4.5和Gemini 3 Pro)进行广泛的实证研究,我们揭示了关键的安全缺陷:即使是前沿模型也无法泛化到不同的攻击范式,领先模型的平均攻击成功率高达49.14%。值得注意的是,我们的研究结果表明,推理模型在面对复杂的多轮越狱攻击时,并不天然具备更强的鲁棒性。通过开源OpenRT,我们提供了一个可持续、可扩展且持续维护的基础设施,以加速AI安全性的发展与标准化。