As large language models (LLMs) continue to advance in capability and influence, ensuring their security and preventing harmful outputs has become crucial. A promising approach to these concerns is training models to automatically generate adversarial prompts for red teaming. However, the increasingly subtle vulnerabilities in LLMs challenge current adversarial methods, which struggle to specifically target and explore such weaknesses. To tackle these challenges, we introduce the $\mathbf{S}\text{elf-}\mathbf{E}\text{volving }\mathbf{A}\text{dversarial }\mathbf{S}\text{afety }\mathbf{(SEAS)}$ optimization framework, which enhances security by leveraging data generated by the model itself. SEAS operates through three iterative stages: Initialization, Attack, and Adversarial Optimization, refining both the Red Team and Target models to improve robustness and safety. This framework reduces reliance on manual testing and significantly enhances the security capabilities of LLMs. Our contributions include a novel adversarial framework and a comprehensive safety dataset. After three iterations, the Target model achieves a security level comparable to GPT-4, while the Red Team model shows a marked increase in attack success rate (ASR) against advanced models. Our code and datasets are released at https://SEAS-LLM.github.io/.
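The three-stage loop described above can be summarized in pseudocode. The following is a minimal sketch, not the authors' implementation: the names (`seas_loop`, `is_unsafe`, `optimize`) are hypothetical, and the generic `optimize` step stands in for whatever fine-tuning procedure the paper actually uses to update each model.

```python
from typing import Callable, List, Tuple

Model = Callable[[str], str]  # stand-in for an LLM: maps a prompt to text


def seas_loop(
    red_team: Model,
    target: Model,
    seed_prompts: List[str],
    is_unsafe: Callable[[str], bool],                # safety judge (assumed component)
    optimize: Callable[[Model, List[Tuple[str, str]]], Model],  # fine-tuning stand-in
    iterations: int = 3,
) -> Tuple[Model, Model]:
    # Stage 1: Initialization -- start from seed adversarial prompts.
    prompts = list(seed_prompts)
    for _ in range(iterations):
        # Stage 2: Attack -- the Red Team rewrites prompts into attacks and
        # the Target responds; the safety judge flags successful attacks.
        attacks = [red_team(p) for p in prompts]
        pairs = [(a, target(a)) for a in attacks]
        successes = [(a, r) for a, r in pairs if is_unsafe(r)]
        # Stage 3: Adversarial Optimization -- successful attacks are used to
        # improve both models: the Red Team to attack better, the Target to
        # refuse better.
        red_team = optimize(red_team, successes)
        target = optimize(target, successes)
        # Successful attacks seed the next round; keep old prompts if none.
        prompts = [a for a, _ in successes] or prompts
    return red_team, target
```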