Visual token compression is widely adopted to improve the inference efficiency of Large Vision-Language Models (LVLMs), enabling their deployment in latency-sensitive and resource-constrained scenarios. However, existing work has focused mainly on efficiency and performance, while the security implications of visual token compression remain largely unexplored. In this work, we first reveal that visual token compression substantially degrades the robustness of LVLMs: models that are robust under uncompressed inference become highly vulnerable once compression is enabled. These vulnerabilities are state-specific: the failure modes emerge only in the compressed setting and disappear entirely when compression is disabled, making them particularly stealthy and difficult to diagnose. By analyzing the key stages of the compression pipeline, we identify instability in token importance ranking as the primary cause of this robustness degradation: small, imperceptible perturbations can substantially reorder token rankings, leading the compression mechanism to discard task-critical information and ultimately causing model failure. Motivated by this observation, we propose a Compression-Aware Attack (CAA) to systematically study and exploit this vulnerability. CAA directly targets the token selection mechanism and induces failures exclusively under compressed inference. We further extend this approach to the more realistic black-box setting with Transfer CAA, in which neither the target model nor its compression configuration is accessible. We also evaluate potential defenses and find that they provide only limited protection. Extensive experiments across models, datasets, and compression methods show that visual token compression significantly undermines robustness, revealing a previously overlooked efficiency-security trade-off.
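To make the ranking-instability argument concrete, the following is a minimal, self-contained sketch, not the paper's actual CAA implementation: the synthetic `scores` stand in for attention-based token importance, and the hand-crafted `delta` models the score shift that an imperceptible image perturbation could induce upstream. It shows how a tiny shift near the selection boundary changes which tokens a top-k compressor keeps.

```python
import numpy as np

# Toy stand-in for attention-based importance scores over 16 visual tokens
# (hypothetical values; token 0 is most important, strictly decreasing).
scores = np.linspace(1.0, 0.0, 16)
keep = 4

def kept_tokens(s: np.ndarray, k: int) -> set:
    """Indices of the k highest-scoring tokens (the ones compression keeps)."""
    return set(np.argsort(s)[-k:])

top_clean = kept_tokens(scores, keep)  # {0, 1, 2, 3}

# A tiny, targeted score shift near the selection boundary: demote one kept
# token and promote one dropped token by less than the typical score gap.
delta = np.zeros_like(scores)
delta[3], delta[4] = -0.08, +0.08

top_attacked = kept_tokens(scores + delta, keep)  # {0, 1, 2, 4}

print("kept (clean):   ", sorted(top_clean))
print("kept (attacked):", sorted(top_attacked))
print("token silently dropped by compression:", sorted(top_clean - top_attacked))
```

Under uncompressed inference all 16 tokens reach the language model either way, which is why this class of failure only manifests once compression is enabled.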