While recent automated red-teaming methods show promise for systematically exposing model vulnerabilities, most existing approaches rely on human-specified workflows. This reliance on manually designed workflows introduces human biases and makes exploring the broader design space expensive. We introduce AgenticRed, an automated pipeline that leverages LLMs' in-context learning to iteratively design and refine red-teaming systems without human intervention. Rather than optimizing attacker policies within predefined structures, AgenticRed treats red-teaming as a system design problem. Inspired by methods such as Meta Agent Search, we develop a novel procedure for evolving agentic systems via evolutionary selection and apply it to automated red-teaming. Red-teaming systems designed by AgenticRed consistently outperform state-of-the-art approaches on HarmBench, achieving a 96% attack success rate (ASR) on Llama-2-7B (a 36% improvement) and 98% on Llama-3-8B. Our approach also transfers strongly to proprietary models, achieving 100% ASR on GPT-3.5-Turbo and GPT-4o-mini, and 60% on Claude-Sonnet-3.5 (a 24% improvement). This work highlights automated system design as a powerful paradigm for AI safety evaluation that can keep pace with rapidly evolving models.
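To make the search procedure concrete, the following is a minimal, hypothetical sketch of the kind of evolutionary loop the abstract describes: an LLM proposes variations of a parent red-teaming system design, each candidate is scored (e.g., by ASR against a target model), and the best-scoring design is selected as the next parent. The function names (`llm_propose`, `evaluate_asr`), the archive structure, and the scoring stubs are illustrative assumptions, not the paper's actual implementation; real LLM and evaluation calls are replaced with deterministic placeholders so the sketch runs standalone.

```python
import random


def llm_propose(parent_design: str, rng: random.Random) -> str:
    # Placeholder for an LLM call that rewrites a parent system design.
    # Here we just append a mutation tag so the loop is runnable.
    return parent_design + f"+m{rng.randint(0, 9)}"


def evaluate_asr(design: str, rng: random.Random) -> float:
    # Placeholder for deploying the candidate red-teaming system against
    # a target model and measuring its attack success rate (ASR).
    return min(1.0, 0.1 * design.count("+m") + 0.1 * rng.random())


def evolve(generations: int = 5, population_size: int = 4, seed: int = 0):
    """Evolutionary selection over LLM-proposed system designs (toy sketch)."""
    rng = random.Random(seed)
    archive = [("seed-design", evaluate_asr("seed-design", rng))]
    for _ in range(generations):
        # Selection: the best-scoring design so far becomes the parent.
        parent, _ = max(archive, key=lambda d: d[1])
        # Variation: ask the LLM for refined candidates and score each one.
        for _ in range(population_size):
            child = llm_propose(parent, rng)
            archive.append((child, evaluate_asr(child, rng)))
    return max(archive, key=lambda d: d[1])


best_design, best_score = evolve()
```

In a real pipeline, `evaluate_asr` would be the expensive step (running attacks against a target model on a benchmark such as HarmBench), and the archive would retain full design programs rather than strings.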