Vision-language models (VLMs) suffer significant computational inefficiency because they must process large numbers of visual tokens. While prior work shows that a large fraction of these tokens are redundant, existing compression methods struggle to balance importance preservation with information diversity. To address this, we propose PruneSID, a training-free Synergistic Importance-Diversity approach built on a two-stage pipeline: (1) Principal Semantic Components Analysis (PSCA), which clusters tokens into semantically coherent groups to ensure comprehensive concept coverage, and (2) Intra-group Non-Maximum Suppression (NMS), which prunes redundant tokens while preserving key representative tokens within each group. Additionally, PruneSID incorporates an information-aware dynamic compression mechanism that adapts the token compression rate to image complexity, preserving more information on average across diverse scenes. Extensive experiments demonstrate state-of-the-art performance: PruneSID achieves 96.3% accuracy on LLaVA-1.5 while retaining only 11.1% of visual tokens, and 92.8% accuracy at an extreme compression rate (5.6%) on LLaVA-NeXT, outperforming prior methods by 2.5% while delivering 7.8$\times$ faster prefilling than the original model. Our framework generalizes across diverse VLMs and across both image and video modalities, demonstrating strong cross-modal versatility. Code is available at https://github.com/ZhengyaoFang/PruneSID.
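To make the two-stage idea concrete, the sketch below illustrates an importance-diversity pruning pipeline of the kind the abstract describes. It is a minimal illustration under stated assumptions, not the paper's implementation: PCA projection plus a tiny k-means serves as a stand-in for PSCA, a cosine-similarity suppression loop stands in for intra-group NMS, and the singular-value entropy used as the image-complexity proxy for the dynamic keep ratio is a hypothetical choice, as are all function names and thresholds.

```python
import numpy as np

def _kmeans(x, k, iters=20, seed=0):
    """Tiny k-means, used here as a stand-in for PSCA clustering."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            members = x[labels == j]
            if len(members):
                centers[j] = members.mean(0)
    return labels

def dynamic_keep_ratio(tokens, lo=0.05, hi=0.25):
    """Hypothetical complexity proxy: normalized entropy of the singular
    value spectrum -- richer images keep a larger fraction of tokens."""
    s = np.linalg.svd(tokens - tokens.mean(0), compute_uv=False)
    p = s / s.sum()
    h = -(p * np.log(p + 1e-12)).sum() / np.log(len(p))
    return lo + (hi - lo) * h

def prune_tokens(tokens, importance, n_groups=8, sim_thresh=0.8):
    """tokens: (N, D) visual token features; importance: (N,) scores,
    e.g. text-to-image attention. Returns indices of kept tokens."""
    # Stage 1 (PSCA stand-in): project onto principal components, then
    # cluster so each group covers one coarse semantic concept.
    x = tokens - tokens.mean(0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    proj = x @ vt[: min(32, vt.shape[0])].T
    labels = _kmeans(proj, n_groups)

    # Stage 2 (intra-group NMS): within each group, greedily keep the
    # most important token and suppress near-duplicates whose cosine
    # similarity to it exceeds the threshold.
    unit = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    kept = []
    for g in range(n_groups):
        idx = np.where(labels == g)[0]
        alive = list(idx[np.argsort(-importance[idx])])
        while alive:
            best = alive.pop(0)
            kept.append(best)
            alive = [i for i in alive if unit[best] @ unit[i] < sim_thresh]

    # Dynamic budget: trim to an image-adaptive count by importance.
    budget = max(1, int(dynamic_keep_ratio(tokens) * len(tokens)))
    kept = sorted(kept, key=lambda i: -importance[i])[:budget]
    return np.array(sorted(kept))

# Usage on random stand-in features (576 tokens, as in LLaVA-1.5):
feats = np.random.default_rng(1).normal(size=(576, 64))
scores = np.abs(feats).mean(1)            # placeholder importance scores
print(prune_tokens(feats, scores).shape)  # indices of retained tokens
```

The separation of stages mirrors the abstract's motivation: clustering first guarantees that every semantic group contributes at least one representative (diversity), while suppression within groups removes near-duplicates rather than globally top-scoring lookalikes (importance without redundancy).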