Recent work has proposed automated red-teaming methods for probing the vulnerabilities of a given target large language model (LLM). These methods use red-teaming LLMs to uncover inputs that induce harmful behavior in the target LLM. In this paper, we study red-teaming strategies that enable a targeted security assessment. We propose an optimization framework for red-teaming with proximity constraints, where the discovered prompts must be similar to reference prompts from a given dataset. This dataset serves as a template for the discovered prompts, anchoring the search for test cases to specific topics, writing styles, or types of harmful behavior. We show that established auto-regressive model architectures do not perform well in this setting. We therefore introduce a black-box red-teaming method inspired by text-diffusion models: Diffusion for Auditing and Red-Teaming (DART). DART modifies the reference prompt by perturbing it in the embedding space, directly controlling the amount of change introduced. We systematically evaluate our method by comparing its effectiveness against established methods based on model fine-tuning and zero- and few-shot prompting. Our results show that DART is significantly more effective at discovering harmful inputs in close proximity to the reference prompt.
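To make the proximity-constrained perturbation idea concrete, the following minimal Python/NumPy sketch perturbs a reference prompt's token embeddings within a fixed-radius ball and maps each perturbed vector back to the nearest vocabulary token. All names here (vocab_embeddings, epsilon, the toy embedding table) are illustrative assumptions for this sketch, not the paper's DART implementation.

    import numpy as np

    # Toy setup: a random embedding table and a short reference prompt.
    rng = np.random.default_rng(0)
    vocab_size, dim = 1000, 64
    vocab_embeddings = rng.normal(size=(vocab_size, dim))   # assumed embedding table
    reference_ids = np.array([3, 17, 42, 7])                # assumed reference prompt (token ids)

    def perturb_prompt(token_ids, epsilon=0.5):
        """Return token ids after a bounded perturbation in embedding space."""
        emb = vocab_embeddings[token_ids]                    # (T, dim) reference embeddings
        noise = rng.normal(size=emb.shape)
        # Scale the noise so each token moves at most `epsilon` from its reference embedding,
        # which is one simple way to enforce a proximity constraint.
        noise *= epsilon / np.maximum(np.linalg.norm(noise, axis=-1, keepdims=True), 1e-8)
        perturbed = emb + noise
        # Project each perturbed vector back to the closest vocabulary embedding.
        dists = np.linalg.norm(vocab_embeddings[None, :, :] - perturbed[:, None, :], axis=-1)
        return dists.argmin(axis=1)

    print(perturb_prompt(reference_ids, epsilon=0.5))

Small values of epsilon keep the perturbed prompt close to the reference (often identical after projection), while larger values allow more tokens to change; in a real red-teaming loop the perturbation would be guided by feedback from the target model rather than drawn at random.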