The rapid development of Large Language Models (LLMs) has brought remarkable generative capabilities across diverse tasks. Despite these impressive achievements, LLMs retain numerous inherent vulnerabilities, particularly when faced with jailbreak attacks. Investigating jailbreak attacks can uncover hidden weaknesses in LLMs and inform the development of more robust defense mechanisms that fortify their security. In this paper, we further explore the boundary of jailbreak attacks on LLMs and propose Analyzing-based Jailbreak (ABJ), an effective attack method that exploits LLMs' growing analysis and reasoning capabilities and reveals their underlying vulnerabilities when handling analysis-based tasks. We conduct a detailed evaluation of ABJ across various open-source and closed-source LLMs; it achieves a 94.8% attack success rate (ASR) and an attack efficiency (AE) of 1.06 on GPT-4-turbo-0409, demonstrating state-of-the-art attack effectiveness and efficiency. Our research highlights the importance of prioritizing and enhancing the safety of LLMs to mitigate the risks of misuse. The code is publicly available at https://github.com/theshi-1128/ABJ-Attack. Warning: This paper contains examples of LLM outputs that might be offensive or harmful.