Vision-Language Models (VLMs) extend large language models with visual reasoning, but their multimodal design also introduces new, underexplored vulnerabilities. Existing multimodal red-teaming methods largely rely on brittle templates, focus on single-attack settings, and expose only a narrow subset of vulnerabilities. To address these limitations, we introduce VERA-V, a variational inference framework that recasts multimodal jailbreak discovery as learning a joint posterior distribution over paired text-image prompts. This probabilistic view enables the generation of stealthy, coupled adversarial inputs that bypass model guardrails. We train a lightweight attacker to approximate the posterior, allowing efficient sampling of diverse jailbreaks and providing distributional insights into vulnerabilities. VERA-V further integrates three complementary strategies: (i) typography-based text prompts that embed harmful cues, (ii) diffusion-based image synthesis that introduces adversarial signals, and (iii) structured distractors that fragment VLM attention. Experiments on the HarmBench and HADES benchmarks show that VERA-V consistently outperforms state-of-the-art baselines on both open-source and frontier VLMs, achieving an attack success rate (ASR) up to 53.75% higher than the best baseline on GPT-4o.
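As a rough illustration of the framing described above, a joint attacker distribution over paired text-image prompts can be cast as a regularized variational objective. The sketch below is an assumption for exposition only: the symbols $q_\phi$ (lightweight attacker), $r$ (jailbreak reward on the target VLM), $p$ (a stealthiness prior over prompts), and $\lambda$ are illustrative and not taken from the paper.
\[
\max_{\phi}\;\;
\mathbb{E}_{(x_T,\,x_I)\sim q_\phi(\cdot\,\mid g)}\!\left[\, r\big(x_T, x_I;\, g\big) \,\right]
\;-\;
\lambda\, D_{\mathrm{KL}}\!\left( q_\phi(x_T, x_I \mid g) \,\middle\|\, p(x_T, x_I) \right),
\]
where $g$ denotes a harmful goal, $x_T$ and $x_I$ the coupled text and image prompts, and the KL term keeps sampled jailbreaks close to a natural prompt prior. Sampling from the learned $q_\phi$ then yields diverse adversarial pairs rather than a single fixed attack.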