In recent years, the rapid development of large language models (LLMs) has brought new vitality to various domains and generated substantial social and economic benefits. However, the swift advancement of LLMs has also introduced new security vulnerabilities. Jailbreak, a form of attack that induces LLMs to output harmful content through carefully crafted prompts, poses a challenge to the safe and trustworthy development of LLMs. Previous jailbreak attack methods primarily exploited the internal capabilities of the model. One category leverages the model's implicit capabilities, where the attacker does not know exactly why the attack succeeds. The other exploits the model's explicit capabilities, where the attacker understands why the attack succeeds, for example by exploiting the model's abilities in coding, in-context learning, or understanding ASCII characters. These earlier jailbreak attacks share a common limitation: they exploit only the model's inherent capabilities. In this paper, we propose a novel jailbreak method, SQL Injection Jailbreak (SIJ), which exploits the way LLMs construct their input prompts to inject jailbreak information into user prompts, enabling successful jailbreaks of the LLMs. On AdvBench, SIJ achieves nearly 100\% attack success rates against five well-known open-source LLMs while incurring lower time costs than previous methods. More importantly, SIJ reveals a new vulnerability in LLMs that urgently needs to be addressed. To this end, we propose a defense method called Self-Reminder-Key and demonstrate its effectiveness through experiments. Our code is available at \href{https://github.com/weiyezhimeng/SQL-Injection-Jailbreak}{https://github.com/weiyezhimeng/SQL-Injection-Jailbreak}.