Video large language models (VLLMs) have recently made significant advances in processing complex video content, yet their inference efficiency remains constrained by the high computational cost of the thousands of visual tokens generated from video inputs. We empirically observe that, unlike with single-image inputs, VLLMs typically attend to visual tokens from different frames at different decoding iterations, making a one-shot pruning strategy prone to mistakenly removing important tokens. Motivated by this, we present DyCoke, a training-free token compression method that optimizes token representation and accelerates VLLMs. DyCoke incorporates a plug-and-play temporal compression module that reduces temporal redundancy by merging redundant tokens across frames, and applies dynamic KV cache reduction to selectively prune spatially redundant tokens. It ensures high-quality inference by dynamically retaining the critical tokens at each decoding step. Extensive experimental results demonstrate that DyCoke outperforms prior SoTA counterparts, achieving a 1.5× inference speedup and a 1.4× memory reduction over the baseline VLLM while still improving performance, all without training.
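The temporal compression idea above — dropping tokens in later frames that are near-duplicates of tokens already kept from earlier frames — can be sketched as follows. This is an illustrative sketch only, not DyCoke's actual algorithm: the `temporal_merge` helper, the per-position cosine-similarity matching, and the `threshold` value are all assumptions for demonstration.

```python
import numpy as np

def temporal_merge(frame_tokens, threshold=0.9):
    """Illustrative cross-frame token pruning.

    frame_tokens: array of shape (T, N, D) -- T frames, N visual
    tokens per frame, D-dimensional embeddings.
    A token at position n in frame t is marked redundant if its
    cosine similarity to the most recently kept token at the same
    position exceeds `threshold`. Returns a (T, N) boolean keep-mask.
    """
    T, N, D = frame_tokens.shape
    # Normalize so dot products become cosine similarities.
    normed = frame_tokens / np.linalg.norm(frame_tokens, axis=-1, keepdims=True)
    keep = np.ones((T, N), dtype=bool)
    last_kept = normed[0].copy()  # reference token per position
    for t in range(1, T):
        sim = np.sum(normed[t] * last_kept, axis=-1)  # per-position cosine
        redundant = sim > threshold
        keep[t, redundant] = False
        # Positions that changed enough become the new references.
        last_kept[~redundant] = normed[t, ~redundant]
    return keep
```

On a perfectly static clip (identical frames), every token after the first frame is pruned, which is the intuition behind why temporally redundant videos compress well; dynamic content keeps more tokens per frame.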