Large Vision-Language Models (VLMs) have demonstrated remarkable performance across multimodal tasks by integrating vision encoders with large language models (LLMs). However, these models remain vulnerable to adversarial attacks. Among such attacks, Universal Adversarial Perturbations (UAPs) are especially powerful, as a single optimized perturbation can mislead the model across diverse input images. In this work, we introduce a novel UAP specifically designed for VLMs: the Doubly-Universal Adversarial Perturbation (Doubly-UAP), capable of universally deceiving VLMs across both image and text inputs. To disrupt the vision encoder's fundamental processing, we analyze the core components of its attention mechanism. Having identified the value vectors in the middle-to-late layers as the most vulnerable, we optimize Doubly-UAP in a label-free manner against the frozen model. Although the perturbation is crafted without any access to the LLM (i.e., black-box with respect to the LLM), Doubly-UAP achieves high attack success rates on VLMs, consistently outperforming baseline methods across vision-language tasks. Extensive ablation studies and analyses further demonstrate the robustness of Doubly-UAP and provide insights into how it influences internal attention mechanisms.
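To make the attack pipeline concrete, below is a minimal PyTorch-style sketch of the label-free optimization described above: a single perturbation delta is optimized against a frozen vision encoder so that the value vectors captured from selected middle-to-late attention layers deviate from their clean counterparts. The module path (`vision_encoder.layers[i].self_attn.v_proj`), the layer indices, the cosine-similarity objective, and the epsilon budget are all illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def register_value_hooks(vision_encoder, layer_ids):
    """Attach forward hooks that record the value-projection outputs of
    the selected attention layers. The attribute path is an assumption
    for a ViT-style encoder and may differ per implementation."""
    storage, handles = {}, []
    for i in layer_ids:
        v_proj = vision_encoder.layers[i].self_attn.v_proj
        def hook(module, inp, out, idx=i):
            storage[idx] = out
        handles.append(v_proj.register_forward_hook(hook))
    return storage, handles

def collect_value_vectors(vision_encoder, images, layer_ids, storage):
    """Run the frozen encoder; the hooks fill `storage` as a side effect."""
    storage.clear()
    vision_encoder(images)
    return [storage[i] for i in layer_ids]

def optimize_uap(vision_encoder, loader, layer_ids,
                 eps=8 / 255, steps=1000, lr=1e-2):
    vision_encoder.eval().requires_grad_(False)  # frozen model
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    storage, handles = register_value_hooks(vision_encoder, layer_ids)
    step = 0
    for images, _ in loader:  # labels unused: the objective is label-free
        if step >= steps:
            break
        with torch.no_grad():  # clean reference pass, no gradient needed
            clean_v = [v.detach() for v in collect_value_vectors(
                vision_encoder, images, layer_ids, storage)]
        adv_v = collect_value_vectors(
            vision_encoder, (images + delta).clamp(0, 1), layer_ids, storage)
        # Push perturbed value vectors away from their clean counterparts
        # by minimizing cosine similarity (one assumed choice of loss).
        loss = sum(F.cosine_similarity(a.flatten(1), c.flatten(1)).mean()
                   for a, c in zip(adv_v, clean_v))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # project back into the L-inf budget
        step += 1
    for h in handles:
        h.remove()
    return delta.detach()
```

At test time the same returned delta would simply be added to every input image, regardless of the accompanying text prompt, which is what makes the perturbation "doubly" universal.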