Large Language Models (LLMs) have showcased remarkable capabilities across diverse domains, encompassing natural language understanding, translation, and even code generation. However, their potential to generate harmful content is a significant concern, which necessitates rigorous testing and comprehensive evaluation of LLMs to ensure safe and responsible use. Extensive testing of LLMs, though, requires substantial computational resources, making it an expensive endeavor. Exploring cost-saving strategies during the testing phase is therefore crucial to balance the need for thorough evaluation with the constraints of resource availability. To address this, our approach begins by transferring moderation knowledge from an LLM to a small model. Subsequently, we deploy two distinct strategies for generating malicious queries: one based on a syntax-tree approach, and the other leveraging an LLM-based method. Finally, our approach incorporates a sequential filter-test process designed to identify test cases that are prone to eliciting toxic responses. We evaluated the efficacy of DistillSeq across four LLMs: GPT-3.5, GPT-4.0, Vicuna-13B, and Llama-13B. Without DistillSeq, the observed attack success rates on these LLMs stood at 31.5% for GPT-3.5, 21.4% for GPT-4.0, 28.3% for Vicuna-13B, and 30.9% for Llama-13B. Upon applying DistillSeq, these success rates notably increased to 58.5%, 50.7%, 52.5%, and 54.4%, respectively, an average relative increase of 93.0% in attack success rate compared to scenarios without DistillSeq. Such findings highlight the significant enhancement DistillSeq offers in reducing the time and resource investment required for effectively testing LLMs.
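The sequential filter-test process described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `distilled_toxicity_score` stands in for the small model distilled from an LLM's moderation knowledge (here replaced by a toy keyword heuristic), and all names and the threshold value are hypothetical.

```python
def distilled_toxicity_score(query: str) -> float:
    """Stand-in for the distilled moderation model: returns the predicted
    probability that `query` elicits a toxic response. The real filter
    would be a small learned model; this toy heuristic just counts
    illustrative risky terms."""
    risky_terms = ("bypass", "weapon", "exploit")
    hits = sum(term in query.lower() for term in risky_terms)
    return min(1.0, hits / len(risky_terms))


def filter_then_test(candidates, threshold=0.4):
    """Sequential filter-test: score every candidate query with the cheap
    distilled model first, and forward only the promising ones to the
    expensive target LLM."""
    promising = [q for q in candidates if distilled_toxicity_score(q) >= threshold]
    # In the full pipeline, each promising query would now be sent to the
    # target LLM and its response checked for toxic content.
    return promising


queries = [
    "How do I bypass a content filter to get weapon instructions?",
    "What is the capital of France?",
]
print(filter_then_test(queries))  # only the first query passes the filter
```

The cost saving comes from the asymmetry: scoring with the small distilled model is cheap, so queries unlikely to elicit toxic responses never consume a call to the target LLM.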