While recent automated red-teaming methods show promise for systematically exposing model vulnerabilities, most existing approaches rely on human-specified workflows. This dependence on manually designed workflows introduces human biases and makes exploring the broader design space expensive. We introduce AgenticRed, an automated pipeline that leverages LLMs' in-context learning to iteratively design and refine red-teaming systems without human intervention. Rather than optimizing attacker policies within predefined structures, AgenticRed treats red-teaming as a system design problem. Inspired by methods like Meta Agent Search, we develop a novel procedure for evolving agentic systems via evolutionary selection and apply it to automatic red-teaming. Red-teaming systems designed by AgenticRed consistently outperform state-of-the-art approaches on HarmBench, achieving a 96% attack success rate (ASR) on Llama-2-7B (a 36% improvement) and 98% on Llama-3-8B. Our approach also transfers strongly to proprietary models, achieving 100% ASR on GPT-3.5-Turbo and GPT-4o, and 60% on Claude-Sonnet-3.5 (a 24% improvement). This work highlights automated system design as a powerful paradigm for AI safety evaluation that can keep pace with rapidly evolving models.
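To make the design-and-refine loop concrete, the following is a minimal sketch of an evolutionary selection procedure over agentic system designs, in the spirit of what the abstract describes. All names here (propose_design, evaluate_asr, the archive structure) are hypothetical illustrations, not the authors' actual API or method; the LLM proposal step and benchmark evaluation are replaced with stand-in stubs.

```python
# Hypothetical sketch of an evolutionary design loop for red-teaming systems.
# The real pipeline would call an LLM to propose designs (using prior designs
# and scores as in-context examples) and evaluate ASR on a benchmark such as
# HarmBench; both are stubbed out here for illustration.
import random

def propose_design(archive: dict[str, float]) -> str:
    """Stand-in for an LLM call that, conditioned on prior designs and their
    scores as in-context examples, proposes a new red-teaming system design."""
    best = max(archive, key=archive.get)
    return f"design-{len(archive)}: variation of ({best})"

def evaluate_asr(design: str) -> float:
    """Stand-in for running a candidate system against a target model on a
    benchmark and measuring its attack success rate."""
    return random.random()  # placeholder score in [0, 1]

def evolve(generations: int = 10, population: int = 4) -> str:
    seed = "seed: baseline attacker workflow"
    archive = {seed: evaluate_asr(seed)}
    for _ in range(generations):
        # Proposal: generate new candidate designs from the current archive.
        candidates = [propose_design(archive) for _ in range(population)]
        for design in candidates:
            archive[design] = evaluate_asr(design)
        # Evolutionary selection: keep only the highest-ASR designs as
        # in-context examples for the next round of proposals.
        archive = dict(sorted(archive.items(), key=lambda kv: -kv[1])[:population])
    return max(archive, key=archive.get)

if __name__ == "__main__":
    print("best design:", evolve())
```

The key design choice this sketch illustrates is that the unit of search is the red-teaming system itself (a design description), not an attacker policy inside a fixed workflow.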