Large Vision-Language Models (VLMs) have demonstrated remarkable performance across multimodal tasks by integrating vision encoders with large language models (LLMs). However, these models remain vulnerable to adversarial attacks. Among such attacks, Universal Adversarial Perturbations (UAPs) are especially powerful, as a single optimized perturbation can mislead the model across various input images. In this work, we introduce a novel UAP specifically designed for VLMs: the Doubly-Universal Adversarial Perturbation (Doubly-UAP), capable of universally deceiving VLMs across both image and text inputs. To effectively disrupt the vision encoder's fundamental processing, we analyze the core components of the attention mechanism. After identifying the value vectors in the middle-to-late layers as the most vulnerable, we optimize Doubly-UAP in a label-free manner with a frozen model. Despite being crafted without any access to the LLM, i.e., treating it as a black box, Doubly-UAP achieves high attack success rates on VLMs, consistently outperforming baseline methods across vision-language tasks. Extensive ablation studies and analyses further demonstrate the robustness of Doubly-UAP and provide insights into how it influences internal attention mechanisms.