The rapid progress of large language models (LLMs) has laid the foundation for multimodal models. However, vision-language models (VLMs) still face heavy computational costs when extended from images to videos due to high frame rates and long durations. Token compression is a promising solution, yet most existing training-free methods cause information loss and performance degradation. To overcome this, we propose \textbf{Memory-Augmented Reinforcement Learning-based Token Compression (MARC)}, which integrates structured retrieval and RL-based distillation. MARC adopts a \textit{retrieve-then-compress} strategy, using a \textbf{Visual Memory Retriever (VMR)} to select key clips and a \textbf{Compression Group Relative Policy Optimization (C-GRPO)} framework to distill reasoning ability from a teacher model to a student model. Experiments on six video benchmarks show that MARC achieves near-baseline accuracy using only one frame's tokens, reducing visual tokens by \textbf{95\%}, GPU memory by \textbf{72\%}, and latency by \textbf{23.9\%}. This demonstrates its potential for efficient, real-time video understanding in resource-constrained settings such as video QA, surveillance, and autonomous driving.