The rapid development of Large Language Models (LLMs) has brought remarkable generative capabilities across diverse tasks. Despite these impressive achievements, however, the models still exhibit numerous security vulnerabilities, particularly when faced with jailbreak attacks. By investigating jailbreak attacks, we can therefore uncover hidden weaknesses in LLMs and guide the development of more robust defense mechanisms to fortify their security. In this paper, we further explore the boundary of jailbreak attacks on LLMs and propose Analyzing-based Jailbreak (ABJ). This effective attack method exploits LLMs' growing analysis and reasoning capabilities, revealing underlying vulnerabilities that surface when the models are given analysis-based tasks. We conduct a detailed evaluation of ABJ across various open-source and closed-source LLMs; it achieves a 94.8% Attack Success Rate (ASR) and 1.06 Attack Efficiency (AE) on GPT-4-turbo-0409, demonstrating state-of-the-art attack effectiveness and efficiency. Our research highlights the importance of prioritizing and enhancing the safety of LLMs to mitigate the risks of misuse. The code is publicly available at https://github.com/theshi-1128/ABJ-Attack.