In the realm of large vision-language models (LVLMs), adversarial jailbreak attacks serve as a red-teaming approach for identifying safety vulnerabilities in these models and their associated defense mechanisms. However, we identify a critical limitation: not every adversarial optimization step leads to a positive outcome, and indiscriminately accepting the optimization result at each step may reduce the overall attack success rate. To address this challenge, we introduce HKVE (Hierarchical Key-Value Equalization), an innovative jailbreaking framework that selectively accepts gradient optimization results based on the distribution of attention scores across different layers, ensuring that every optimization step contributes positively to the attack. Extensive experiments demonstrate HKVE's significant effectiveness, achieving attack success rates of 75.08% on MiniGPT4, 85.84% on LLaVA and 81.00% on Qwen-VL, substantially outperforming existing methods by margins of 20.43%, 21.01% and 26.43%, respectively. Furthermore, making every step effective not only increases the attack success rate but also reduces the number of iterations required, thereby lowering computational cost. Warning: This paper contains potentially harmful example data.
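The core idea stated above, accepting a gradient step only when a layer-wise attention criterion improves, can be illustrated with a minimal sketch. This is not the paper's implementation: the model interface, the attention_balance_score criterion and target_loss are hypothetical placeholders standing in for HKVE's actual hierarchical key-value equalization procedure.

```python
# Minimal sketch of selective step acceptance, NOT the HKVE implementation.
# The model interface, attention_balance_score and target_loss are assumed
# placeholders for illustration only.
import torch

def attention_balance_score(model, image, prompt):
    """Hypothetical measure of how evenly attention mass is spread across
    layers; the real HKVE criterion may differ."""
    with torch.no_grad():
        outputs = model(image=image, prompt=prompt, output_attentions=True)
    # Mean attention per layer; lower variance across layers = more equalized.
    per_layer = torch.stack([a.mean() for a in outputs.attentions])
    return -per_layer.var()

def selective_attack(model, image, prompt, target, steps=100, lr=1e-2):
    adv = image.clone().requires_grad_(True)
    best = adv.detach().clone()
    best_score = attention_balance_score(model, best, prompt)

    for _ in range(steps):
        # Standard gradient step on an adversarial target loss (assumed API).
        loss = model.target_loss(image=adv, prompt=prompt, target=target)
        grad, = torch.autograd.grad(loss, adv)
        candidate = (adv - lr * grad.sign()).clamp(0, 1).detach()

        # Selective acceptance: keep the update only if the layer-wise
        # attention criterion improves, so every accepted step helps.
        score = attention_balance_score(model, candidate, prompt)
        if score > best_score:
            best, best_score = candidate, score
        adv = best.clone().requires_grad_(True)
    return best
```

In this sketch, rejected steps are simply discarded and optimization resumes from the last accepted perturbation, which mirrors the abstract's claim that fewer but uniformly useful iterations can lower computational cost.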