Vision token pruning has proven to be an effective acceleration technique for efficient Vision Language Models (VLMs). However, while existing pruning methods preserve performance well on visual question answering (VQA), they suffer substantial degradation on visual grounding (VG) tasks. Our analysis of the VLM processing pipeline reveals that strategies relying on global semantic similarity and attention scores discard the global spatial reference frame, which emerges from the interactions among tokens' positional information. Motivated by these findings, we propose $\text{Nüwa}$, a two-stage token pruning framework that enables efficient feature aggregation while maintaining spatial integrity. In the first stage, after the vision encoder, we apply three operations, namely separation, alignment, and aggregation, inspired by swarm intelligence algorithms, to retain information-rich global spatial anchors. In the second stage, within the LLM, we perform text-guided pruning to retain task-relevant visual tokens. Extensive experiments demonstrate that $\text{Nüwa}$ achieves state-of-the-art (SOTA) performance on multiple VQA benchmarks (improving from 94% to 95%) and yields substantial improvements on visual grounding tasks (from 7% to 47%).
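To make the two-stage idea concrete, below is a minimal, hypothetical sketch of such a pipeline in PyTorch. It is not the paper's implementation: the function names (`stage1_spatial_prune`, `stage2_text_guided_prune`), the feature-norm proxy for information richness, the grid-anchor heuristic standing in for the separation/alignment/aggregation operations, and the keep ratios are all illustrative assumptions.

```python
# Hypothetical sketch of two-stage vision token pruning; shapes and scoring
# heuristics are assumptions, not the paper's actual method.
import torch


def stage1_spatial_prune(vision_tokens: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """After the vision encoder: keep information-rich tokens while retaining
    a coarse spatial layout by always keeping anchors sampled on a regular grid.
    vision_tokens: (N, D) patch tokens from the vision encoder."""
    n, _ = vision_tokens.shape
    k = max(1, int(n * keep_ratio))
    # Proxy for "information richness": per-token feature norm (an assumption).
    scores = vision_tokens.norm(dim=-1)
    # Spatial anchors spread over the token sequence are always retained.
    anchor_idx = torch.linspace(0, n - 1, steps=max(1, k // 2)).long()
    scores[anchor_idx] = float("inf")
    keep_idx = scores.topk(k).indices.sort().values
    return vision_tokens[keep_idx]


def stage2_text_guided_prune(vision_tokens: torch.Tensor,
                             text_tokens: torch.Tensor,
                             keep_ratio: float = 0.5) -> torch.Tensor:
    """Inside the LLM: rank visual tokens by text-to-vision attention and keep
    the most task-relevant ones."""
    k = max(1, int(vision_tokens.shape[0] * keep_ratio))
    # Cross-modal relevance: mean softmax attention from text tokens to visual tokens.
    relevance = (text_tokens @ vision_tokens.T).softmax(dim=-1).mean(dim=0)
    keep_idx = relevance.topk(k).indices.sort().values
    return vision_tokens[keep_idx]


if __name__ == "__main__":
    v = torch.randn(576, 1024)   # e.g. 24x24 patch tokens
    t = torch.randn(32, 1024)    # text/query tokens in the same embedding space
    v1 = stage1_spatial_prune(v, keep_ratio=0.5)
    v2 = stage2_text_guided_prune(v1, t, keep_ratio=0.5)
    print(v.shape, "->", v1.shape, "->", v2.shape)
```

In this sketch each stage roughly halves the visual token count; in practice the keep ratios would be tuned per stage and per benchmark.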