Deployable Large Language Models (LLMs) must conform to the criterion of helpfulness and harmlessness, thereby achieving consistency between LLM outputs and human values. Red-teaming techniques constitute a critical way toward this criterion. Existing work relies solely on manual red-team designs and heuristic adversarial prompts for vulnerability detection and optimization. These approaches lack a rigorous mathematical formulation, limiting both the exploration of diverse attack strategies under quantifiable measures and the optimization of LLMs with convergence guarantees. In this paper, we present the Red-teaming Game (RTG), a general game-theoretic framework that requires no manual annotation. RTG is designed to analyze the multi-turn attack and defense interactions between Red-team Language Models (RLMs) and a Blue-team Language Model (BLM). Within the RTG, we propose the Gamified Red-teaming Solver (GRTS), equipped with a diversity measure over the semantic space. GRTS is an automated red-teaming technique that solves RTG toward a Nash equilibrium through meta-game analysis, which yields a theoretically guaranteed optimization direction for both the RLMs and the BLM. Empirical results on multi-turn attacks with RLMs show that GRTS autonomously discovers diverse attack strategies and effectively improves the security of LLMs, outperforming existing heuristic red-team designs. Overall, RTG establishes a foundational framework for red-teaming tasks and constitutes a new scalable oversight technique for alignment.
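To make the meta-game analysis concrete, the sketch below illustrates the general PSRO-style loop that solvers of this kind build on: maintain pools of red and blue policies, solve the restricted meta-game for an approximate Nash equilibrium, and extend each pool with a best response to the opponent's equilibrium mixture. This is a minimal illustration, not the paper's GRTS implementation: the toy vector "policies", the tanh payoff oracle, the random-search best response, and all names below are placeholder assumptions, and GRTS's semantic-space diversity term is omitted.

```python
# Minimal PSRO-style meta-game loop (illustrative sketch only; not the
# paper's GRTS). Toy vector "policies" and a tanh payoff stand in for
# RLM/BLM checkpoints and multi-turn attack-success evaluation.
import numpy as np

rng = np.random.default_rng(0)
DIM = 4  # dimensionality of the toy policy vectors (assumption)

def payoff(red: np.ndarray, blue: np.ndarray) -> float:
    # Zero-sum stand-in for "attack success": red maximizes, blue minimizes.
    return float(np.tanh(red @ blue))

def solve_meta_game(M: np.ndarray, steps: int = 5000):
    """Fictitious play on the restricted meta-game; converges to an
    approximate Nash equilibrium in two-player zero-sum games."""
    r_counts, b_counts = np.ones(M.shape[0]), np.ones(M.shape[1])
    for _ in range(steps):
        r_mix, b_mix = r_counts / r_counts.sum(), b_counts / b_counts.sum()
        r_counts[np.argmax(M @ b_mix)] += 1  # red best-responds to blue mix
        b_counts[np.argmin(r_mix @ M)] += 1  # blue best-responds to red mix
    return r_counts / r_counts.sum(), b_counts / b_counts.sum()

def best_response(opp_pool, opp_mix, red_side: bool, trials: int = 256):
    """Oracle stand-in: random search for a policy that best-responds to
    the opponent's equilibrium mixture. A real solver would instead
    fine-tune an LLM here (and GRTS additionally rewards semantic
    diversity, which this sketch omits)."""
    best, best_val = None, -np.inf
    for _ in range(trials):
        cand = rng.normal(size=DIM)
        val = sum(w * (payoff(cand, q) if red_side else -payoff(q, cand))
                  for w, q in zip(opp_mix, opp_pool))
        if val > best_val:
            best, best_val = cand, val
    return best

red_pool, blue_pool = [rng.normal(size=DIM)], [rng.normal(size=DIM)]
for it in range(6):
    # Payoff matrix of the restricted meta-game over the current pools.
    M = np.array([[payoff(r, b) for b in blue_pool] for r in red_pool])
    r_mix, b_mix = solve_meta_game(M)
    red_pool.append(best_response(blue_pool, b_mix, red_side=True))
    blue_pool.append(best_response(red_pool[:-1], r_mix, red_side=False))
    print(f"iter {it}: meta-game value ~= {r_mix @ M @ b_mix:.3f}")
```

As the pools grow, the restricted meta-game equilibrium traces the optimization direction for both sides; in the paper's setting, each payoff entry would be estimated from multi-turn RLM-BLM dialogues rather than from a closed-form oracle.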