Jailbreak attacks on large language models (LLMs) induce these models to generate harmful content that violates ethics or laws, posing a significant threat to LLM security. Current jailbreak attacks face two main challenges: low success rates due to defensive measures and high resource requirements for crafting specific prompts. This paper introduces Virtual Context, which leverages special tokens, previously overlooked in LLM security, to improve jailbreak attacks. Virtual Context addresses both challenges: it significantly increases the success rates of existing jailbreak methods and requires minimal background knowledge about the target model, enhancing effectiveness in black-box settings without additional overhead. Comprehensive evaluations show that Virtual Context-assisted jailbreak attacks improve the success rates of four widely used jailbreak methods by approximately 40% across various LLMs. Moreover, applying Virtual Context directly to original malicious behaviors, without any other jailbreak method, still achieves a notable jailbreak effect. In summary, our research highlights the potential of special tokens in jailbreak attacks and recommends including this threat in red-teaming testing to comprehensively enhance LLM security.