Large Language Models (LLMs) have shown impressive proficiency across a range of natural language processing tasks, yet they remain vulnerable to adversarial prompts, known as jailbreak attacks, carefully designed to elicit harmful responses. Traditional methods rely on manual heuristics, which generalize poorly. Optimization-based attacks, while automatic, often produce unnatural jailbreak prompts that are easily detected by safety filters, or incur high computational overhead due to discrete token optimization. To address these limitations, we introduce Generative Adversarial Suffix Prompter (GASP), a novel framework that combines human-readable prompt generation with Latent Bayesian Optimization (LBO) to improve adversarial suffix creation in a fully black-box setting. GASP leverages LBO to craft adversarial suffixes by efficiently exploring continuous embedding spaces, gradually optimizing the model to improve attack efficacy while preserving prompt coherence through a targeted iterative refinement procedure. Our experiments show that GASP generates natural jailbreak prompts, significantly improving attack success rates, reducing training time, and accelerating inference, making it an efficient and scalable solution for red-teaming LLMs.
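To make the LBO component concrete, the sketch below shows a minimal Bayesian-optimization loop over a continuous latent space, of the kind the abstract describes: latent vectors are decoded into candidate suffixes, scored by a black-box oracle, and a Gaussian-process surrogate with an expected-improvement acquisition proposes the next latent to try. This is an illustrative sketch, not the paper's implementation: `decode_suffix` and `attack_score` are hypothetical stand-ins for the prompter model and the target-LLM evaluation, the latent dimension and budgets are arbitrary, and scikit-learn's GP with random-search acquisition maximization is assumed for simplicity.

```python
# Minimal sketch of Latent Bayesian Optimization over a suffix-embedding
# space. Assumptions: decode_suffix / attack_score are hypothetical
# stand-ins for GASP's prompter and black-box scoring oracle.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

LATENT_DIM = 8           # dimensionality of the continuous latent space (assumed)
N_INIT, N_ITERS = 5, 20  # random warm-up points, then BO iterations (assumed)

def decode_suffix(z: np.ndarray) -> str:
    """Hypothetical: map a latent vector to a human-readable suffix
    via a pre-trained prompter model."""
    return f"suffix<{z.round(2).tolist()}>"

def attack_score(suffix: str) -> float:
    """Hypothetical black-box oracle: higher = more effective jailbreak.
    A deterministic toy stand-in so the sketch runs end to end; a real
    oracle would query the target LLM and judge its response."""
    return float(np.sin(sum(suffix.encode()) / 50.0))

def expected_improvement(gp, X_cand, best_y, xi=0.01):
    """Standard EI acquisition from the GP posterior mean/std."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)  # avoid division by zero
    imp = mu - best_y - xi
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(N_INIT, LATENT_DIM))       # warm-up latents
y = np.array([attack_score(decode_suffix(z)) for z in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(N_ITERS):
    gp.fit(X, y)  # refit the surrogate on all evaluations so far
    # Maximize the acquisition by random search over candidate latents.
    cand = rng.uniform(-1.0, 1.0, size=(256, LATENT_DIM))
    z_next = cand[np.argmax(expected_improvement(gp, cand, y.max()))]
    y_next = attack_score(decode_suffix(z_next))
    X, y = np.vstack([X, z_next]), np.append(y, y_next)

best = X[np.argmax(y)]
print("best suffix:", decode_suffix(best), "| score:", y.max())
```

The key design point the sketch illustrates is that the surrogate and acquisition operate entirely in the continuous embedding space, so only the decoded suffixes ever touch the black-box target, which is what keeps the search gradient-free and query-efficient relative to discrete token optimization.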