Token pruning is essential for improving the computational efficiency of vision-language models (VLMs), particularly on video tasks, where temporal redundancy is prevalent. Prior approaches typically prune tokens either (1) within the vision transformer (ViT), exclusively for unimodal perception tasks such as action recognition and object segmentation, without adapting to downstream vision-language tasks; or (2) only within the LLM, leaving the ViT output intact and often requiring complex text-conditioned token selection mechanisms. In this paper, we introduce Spatio-Temporal Token Scoring (STTS), a simple, lightweight module that prunes vision tokens across both the ViT and the LLM without text conditioning or token merging, and that is fully compatible with end-to-end training. By learning to score tokens temporally via an auxiliary loss and spatially via the LLM's downstream gradients, aided by our efficient packing algorithm, STTS prunes 50% of vision tokens throughout the entire architecture, yielding a 62% efficiency improvement in both training and inference with only a 0.7% drop in average performance across 13 short- and long-video QA tasks. The efficiency gains grow as more frames are sampled per video. Applying test-time scaling to long-video QA yields a further 0.5-1% performance gain over the baseline. Overall, STTS represents a novel, simple, yet effective technique for unified, architecture-wide vision token pruning.
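The abstract does not include implementation details, so the following is only a minimal, hypothetical sketch of the general technique it names: scoring vision tokens with a learned head and keeping the top-scoring fraction. It is not the paper's STTS module; the `score_head` scorer, the `keep_ratio` parameter, and the sigmoid gating used to keep the scorer trainable through downstream gradients are all assumptions for illustration.

```python
# Hypothetical sketch of score-based top-k vision-token pruning (not the
# paper's STTS implementation): a linear head scores each token, the
# lowest-scoring half is dropped, and a sigmoid gate keeps the scorer
# differentiable through the LLM's downstream loss.
import torch
import torch.nn as nn

class TokenScorer(nn.Module):
    """Scores vision tokens and keeps the top `keep_ratio` fraction."""

    def __init__(self, dim: int, keep_ratio: float = 0.5):
        super().__init__()
        self.score_head = nn.Linear(dim, 1)  # assumed per-token scorer
        self.keep_ratio = keep_ratio

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim)
        scores = self.score_head(tokens).squeeze(-1)          # (B, N)
        k = max(1, int(tokens.shape[1] * self.keep_ratio))
        topk = scores.topk(k, dim=1).indices                  # (B, k)
        topk, _ = topk.sort(dim=1)                            # keep original token order
        idx = topk.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
        kept = tokens.gather(1, idx)                          # (B, k, dim)
        # Gating by the (sigmoid) scores lets gradients from the downstream
        # loss reach the scorer, analogous to the spatial scoring branch
        # described in the abstract.
        gate = torch.sigmoid(scores.gather(1, topk)).unsqueeze(-1)
        return kept * gate
```

For example, with `keep_ratio=0.5` an input of shape `(2, 256, 1024)` is reduced to `(2, 128, 1024)` kept tokens, matching the 50% pruning rate reported in the abstract.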