Vision-language models (VLMs) typically encode substantially more visual tokens than text tokens, resulting in significant token redundancy. Pruning uninformative visual tokens is therefore crucial for improving computational efficiency, and language-to-vision attention has become a widely used importance criterion for this purpose. However, we find that attention in VLMs is systematically biased: it disproportionately favors tokens appearing later in the sequence, manifesting as over-attention to lower image regions, and assigns inflated scores to semantically empty padding tokens. These behaviors stem from the intrinsic recency bias and attention-sink effects inherited from large language models (LLMs), and they distort attention-based pruning by preserving irrelevant visual content. To derive a pruning criterion better aligned with semantic relevance, we introduce two lightweight yet effective debiasing techniques that restore the reliability of attention. The first compensates for positional distortion by removing the recency-induced attention trend, yielding a content-aware, position-agnostic importance measure. The second suppresses attention-sink effects by eliminating spurious attention on padding tokens. Our method is model-agnostic, pruning-method-agnostic, and task-agnostic, enabling plug-and-play integration with existing VLM pruning methods. Despite its simplicity, it consistently delivers substantial performance gains: we evaluate it on ten vision-language benchmarks spanning image- and video-based tasks, against seven state-of-the-art visual token pruning methods, and across two representative VLM architectures, demonstrating strong effectiveness and generalizability. Our code is available at https://github.com/intcomp/attention-bias.
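The two debiasing steps can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's exact formulation: it assumes per-token language-to-vision attention scores are given as a flat vector, models the recency trend as a simple linear fit over sequence position, and masks padding tokens by index; the function names `debias_attention` and `prune` are hypothetical.

```python
import numpy as np

def debias_attention(attn, positions, padding_mask):
    """Illustrative debiasing of language-to-vision attention scores.

    attn:         (N,) attention score received by each visual token
    positions:    (N,) sequence position of each visual token
    padding_mask: (N,) True where the token comes from image padding

    Assumption: the recency bias is approximated here by a linear
    trend of attention vs. position; the actual method may differ.
    """
    # Step 1: remove the recency-induced trend. Fit the trend on
    # non-padding tokens only and keep the residual, giving a
    # position-agnostic importance measure.
    keep = ~padding_mask
    slope, intercept = np.polyfit(positions[keep], attn[keep], deg=1)
    debiased = attn - (slope * positions + intercept)

    # Step 2: suppress attention sinks by eliminating the spurious
    # attention on semantically empty padding tokens.
    return np.where(padding_mask, -np.inf, debiased)

def prune(attn, positions, padding_mask, keep_ratio=0.5):
    """Keep the top-scoring fraction of visual tokens after debiasing."""
    scores = debias_attention(attn, positions, padding_mask)
    k = max(1, int(len(scores) * keep_ratio))
    return np.argsort(scores)[-k:]  # indices of retained tokens
```

With a raw-attention criterion, a genuinely salient early token can lose out to late-sequence tokens and padding sinks; after trend removal and padding suppression, the salient token's residual score dominates and padding is never retained.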