Manual red teaming is a commonly used method to identify vulnerabilities in large language models (LLMs), but it is costly and unscalable. In contrast, automated red teaming uses a Red LLM to automatically generate adversarial prompts for the Target LLM, offering a scalable way to detect safety vulnerabilities. However, the difficulty of building a powerful automated Red LLM lies in the fact that the safety vulnerabilities of the Target LLM change dynamically as the Target LLM evolves. To mitigate this issue, we propose a Deep Adversarial Automated Red Teaming (DART) framework in which the Red LLM and the Target LLM interact deeply and dynamically in an iterative manner. In each iteration, in order to generate as many successful attacks as possible, the Red LLM not only takes into account the responses from the Target LLM but also adversarially adjusts its attack directions by monitoring the global diversity of the attacks generated across multiple iterations. Simultaneously, to explore the dynamically changing safety vulnerabilities of the Target LLM, we allow the Target LLM to enhance its safety via an active-learning-based data selection mechanism. Experimental results demonstrate that DART significantly reduces the safety risk of the Target LLM. In human evaluation on the Anthropic Harmless dataset, DART eliminates 53.4\% of violation risks compared to the instruction-tuned Target LLM. We will release the datasets and code of DART soon.
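The iterative Red-LLM/Target-LLM interaction described in the abstract can be sketched as a toy loop. All names here (`red_generate`, `target_respond`, the topic-count diversity heuristic, the "patch on success" update) are illustrative assumptions for exposition, not the paper's actual implementation:

```python
def attack_diversity(prompts):
    """Crude global-diversity proxy: fraction of unique attack prompts so far."""
    return len(set(prompts)) / max(len(prompts), 1)

def red_generate(history, seed_topics):
    """Red LLM stand-in: pick the least-explored attack topic, steering away
    from topics over-used across earlier iterations (the 'global diversity'
    signal the abstract describes)."""
    counts = {t: history.count(t) for t in seed_topics}
    return min(seed_topics, key=lambda t: counts[t])

def target_respond(prompt, patched):
    """Target LLM stand-in: an attack elicits a violation unless the topic
    was already closed by a safety-training update."""
    return "refusal" if prompt in patched else "violation"

def dart_loop(seed_topics, iterations=5):
    history, patched, successes = [], set(), []
    for _ in range(iterations):
        prompt = red_generate(history, seed_topics)       # Red LLM step
        history.append(prompt)
        if target_respond(prompt, patched) == "violation":
            successes.append(prompt)
            # Stand-in for active-learning data selection: the successful
            # attack is selected for safety training, so the corresponding
            # vulnerability is closed in later iterations.
            patched.add(prompt)
    return successes, attack_diversity(history)

successes, diversity = dart_loop(["topic_a", "topic_b", "topic_c"])
```

In this sketch the Red side first covers unexplored topics (raising diversity), while the Target side patches each exposed vulnerability, so repeated attacks on a patched topic fail in later rounds, mirroring the dynamically shifting vulnerability surface the abstract motivates.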