Recent efforts to accelerate inference in Multimodal Large Language Models (MLLMs) have largely focused on visual token compression. The effectiveness of these methods is commonly evaluated by measuring the accuracy drop on existing MLLM benchmarks before and after compression. However, these benchmarks were originally designed to assess general perception and reasoning abilities, not the specific challenges posed by visual token compression, leading to a fundamental task mismatch. In this work, we uncover a counterintuitive yet consistent phenomenon: simple image downsampling outperforms many advanced visual token compression methods across multiple widely used benchmarks. Through a comprehensive empirical study spanning eight popular benchmarks and multiple state-of-the-art compression techniques, we show that (i) current benchmarks contain substantial noise (task-irrelevant samples) for evaluating visual token compression, and (ii) downsampling can act as an effective data filter that distinguishes simple from difficult samples with respect to compression sensitivity. Motivated by these findings, we propose VTC-Bench, an evaluation framework that explicitly leverages downsampling as a discriminator to denoise existing benchmarks, enabling a fairer and more meaningful complementary assessment of visual token compression methods.
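The downsampling-as-discriminator idea can be sketched as a simple partition over benchmark samples. The function name and the concrete filtering rule below (a sample is "simple" if the model still answers it correctly after downsampling, "difficult" if it only succeeds at full resolution, and "noise" if it fails even uncompressed) are illustrative assumptions, not necessarily the exact VTC-Bench criterion:

```python
def partition_by_downsampling(sample_ids, correct_full, correct_down):
    """Split benchmark samples by compression sensitivity.

    sample_ids   -- list of sample identifiers
    correct_full -- per-sample correctness at full image resolution
    correct_down -- per-sample correctness after image downsampling
    """
    simple, difficult, noise = [], [], []
    for sid, full_ok, down_ok in zip(sample_ids, correct_full, correct_down):
        if not full_ok:
            # Model fails even without compression: task-irrelevant
            # for measuring compression quality.
            noise.append(sid)
        elif down_ok:
            # Survives naive downsampling: insensitive to token reduction.
            simple.append(sid)
        else:
            # Requires full visual detail: sensitive to compression.
            difficult.append(sid)
    return simple, difficult, noise


# Toy usage with hypothetical per-sample results.
ids = ["q1", "q2", "q3", "q4"]
simple, difficult, noise = partition_by_downsampling(
    ids,
    correct_full=[True, True, True, False],
    correct_down=[True, False, True, False],
)
# simple -> ["q1", "q3"], difficult -> ["q2"], noise -> ["q4"]
```

Only the "difficult" partition genuinely exercises a compression method's ability to preserve fine-grained visual information; scoring on the full benchmark mixes in samples where compression quality is irrelevant.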