Video-based multimodal large language models (V-MLLMs) have shown vulnerability to adversarial examples in video-text multimodal tasks. However, the transferability of adversarial videos to unseen models, a common and practical real-world scenario, remains unexplored. In this paper, we pioneer an investigation into the transferability of adversarial video samples across V-MLLMs. We find that existing adversarial attack methods face significant limitations when applied in black-box settings for V-MLLMs, which we attribute to the following shortcomings: (1) lacking generalization in perturbing video features, (2) focusing only on sparse key frames, and (3) failing to integrate multimodal information. To address these limitations and deepen the understanding of V-MLLM vulnerabilities in black-box scenarios, we introduce the Image-to-Video MLLM (I2V-MLLM) attack. In I2V-MLLM, we utilize an image-based multimodal large language model (I-MLLM) as a surrogate model to craft adversarial video samples. Multimodal interactions and spatiotemporal information are integrated to disrupt video representations within the latent space, improving adversarial transferability. Additionally, a perturbation propagation technique is introduced to handle the unknown frame sampling strategies used by different V-MLLMs. Experimental results demonstrate that our method generates adversarial examples with strong transferability across different V-MLLMs on multiple video-text multimodal tasks. Compared to white-box attacks on these models, our black-box attacks (using BLIP-2 as a surrogate model) achieve competitive performance, with average attack success rates (AASR) of 57.98% on MSVD-QA and 58.26% on MSRVTT-QA for zero-shot VideoQA.
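To make the mechanism sketched in the abstract concrete, the snippet below gives a minimal PGD-style illustration of attacking video frames through an image-based surrogate's latent space. It is a sketch under assumptions, not the authors' implementation: `i2v_feature_attack` and `image_encoder` are hypothetical names, and the full method's multimodal-interaction objective and perturbation propagation step are only approximated here by perturbing every frame.

```python
# Minimal sketch of the core I2V-MLLM idea under stated assumptions:
# a frozen image-feature surrogate (e.g., the vision encoder of an
# I-MLLM such as BLIP-2) is used to push every frame's latent
# representation away from its clean counterpart via PGD.
import torch
import torch.nn.functional as F

def i2v_feature_attack(frames, image_encoder, eps=8/255, alpha=1/255, steps=100):
    """frames: (T, C, H, W) tensor in [0, 1], the sampled video frames.
    image_encoder: frozen surrogate mapping frames to (T, D) features.
    eps / alpha: L-inf budget and step size; steps: PGD iterations."""
    with torch.no_grad():
        clean_feats = image_encoder(frames)  # clean latent representations

    # Perturbing all frames (rather than sparse key frames) stands in for
    # the paper's perturbation propagation: the attack then survives
    # whatever frame sampling strategy an unseen V-MLLM applies.
    delta = torch.zeros_like(frames, requires_grad=True)
    for _ in range(steps):
        adv_feats = image_encoder(frames + delta)
        # Gradient ascent on negative cosine similarity drives the
        # adversarial latents away from the clean ones.
        loss = -F.cosine_similarity(adv_feats, clean_feats, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)                             # L-inf budget
            delta.copy_((frames + delta).clamp(0, 1) - frames)  # valid pixels
        delta.grad.zero_()
    return (frames + delta).detach()
```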