Deployable Large Language Models (LLMs) must conform to the criteria of helpfulness and harmlessness, thereby achieving consistency between LLM outputs and human values. Red-teaming techniques constitute a critical path toward this goal. Existing work relies solely on manual red-team designs and heuristic adversarial prompts for vulnerability detection and optimization. These approaches lack a rigorous mathematical formulation, which limits both the exploration of diverse attack strategies under a quantifiable measure and the optimization of LLMs with convergence guarantees. In this paper, we present the Red-teaming Game (RTG), a general game-theoretic framework that requires no manual annotation. RTG is designed to analyze multi-turn attack and defense interactions between Red-team Language Models (RLMs) and a Blue-team Language Model (BLM). Within the RTG, we propose the Gamified Red-teaming Solver (GRTS), which incorporates a diversity measure over the semantic space. GRTS is an automated red-teaming technique that solves the RTG toward a Nash equilibrium through meta-game analysis, which corresponds to a theoretically guaranteed optimization direction for both the RLMs and the BLM. Empirical results on multi-turn attacks with RLMs show that GRTS autonomously discovers diverse attack strategies and effectively improves the security of LLMs, outperforming existing heuristic red-team designs. Overall, RTG establishes a foundational framework for red-teaming tasks and constitutes a new scalable oversight technique for alignment.
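To make the meta-game step concrete, the following is a minimal sketch, not the paper's GRTS implementation: it assumes the red/blue interaction has been summarized as a zero-sum payoff matrix (e.g., attack success rates estimated from rollouts between candidate RLM and BLM policies) and approximates a Nash equilibrium of that meta-game via fictitious play. All function names and the toy matrix are illustrative assumptions.

```python
import numpy as np

def solve_zero_sum_meta_game(payoff: np.ndarray, iters: int = 10_000):
    """Approximate a Nash equilibrium of a two-player zero-sum meta-game
    via fictitious play. payoff[i, j] is the red player's payoff when red
    strategy i meets blue strategy j (blue receives -payoff[i, j]).
    Returns the empirical mixed strategies of both players, which converge
    to an equilibrium in zero-sum games."""
    n_red, n_blue = payoff.shape
    red_counts = np.zeros(n_red)
    blue_counts = np.zeros(n_blue)
    # Start each player from an arbitrary pure strategy.
    red_counts[0] += 1
    blue_counts[0] += 1
    for _ in range(iters):
        # Red best-responds to blue's empirical mixture.
        blue_mix = blue_counts / blue_counts.sum()
        red_counts[np.argmax(payoff @ blue_mix)] += 1
        # Blue best-responds (minimizes red's payoff) to red's mixture.
        red_mix = red_counts / red_counts.sum()
        blue_counts[np.argmin(red_mix @ payoff)] += 1
    return red_counts / red_counts.sum(), blue_counts / blue_counts.sum()

# Illustrative usage on a hypothetical 3x3 payoff matrix of attack success rates.
toy_payoff = np.array([[0.9, 0.2, 0.4],
                       [0.3, 0.8, 0.5],
                       [0.5, 0.4, 0.7]])
red_mix, blue_mix = solve_zero_sum_meta_game(toy_payoff)
print("red meta-strategy:", red_mix.round(3))
print("blue meta-strategy:", blue_mix.round(3))
```

In a population-based solver of this kind, the resulting meta-strategies would weight which attack and defense policies to sample when training new best responses; the fictitious-play choice here is only one standard way to solve the restricted meta-game.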