With the advancement of large-scale language modeling, large multimodal models that combine visual encoders with large language models have demonstrated exceptional performance on a variety of visual tasks. Most current large multimodal models map the visual features produced by the visual encoder into the large language model and feed them, alongside the text, as input for downstream tasks. The number of visual tokens therefore directly affects the model's training and inference speed. Although there has been substantial work on token pruning for vision transformers, for large multimodal models, pruning or compressing tokens based on visual information alone may discard important information. On the other hand, the textual input, posed as a question, may contain valuable cues that help answer it, providing the model with additional knowledge. To address the oversimplification and excessive pruning that most purely visual token-pruning methods can incur, we propose a training-free, text-guided dynamic visual token recovery mechanism. It leverages the similarity between the question text and the visual tokens to recover visually meaningful tokens that carry important textual information, while merging the remaining, less important tokens. Experimental results show that our method achieves performance comparable to the original models while compressing the visual tokens to an average of 10% of their original number. Our source code will be made publicly available upon acceptance.
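The recovery mechanism described above can be illustrated with a minimal sketch. This is not the paper's implementation; it only assumes that each visual token is scored by its maximum cosine similarity to any question-text token, that the top-scoring fraction of tokens is kept, and that the remaining tokens are merged into a single averaged token. The function name, the `keep_ratio` parameter, and the single-token merge are all illustrative assumptions.

```python
import numpy as np

def text_guided_token_reduction(visual, text, keep_ratio=0.1):
    """Hypothetical sketch: keep the visual tokens most similar to the
    question text and merge the rest into one averaged token.

    visual: (N, d) array of visual token features
    text:   (M, d) array of question-text token features
    """
    # Cosine similarity between every visual token and every text token.
    v = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    t = text / np.linalg.norm(text, axis=1, keepdims=True)
    sim = v @ t.T                          # (N, M)

    # Relevance of a visual token = its best match to any text token.
    score = sim.max(axis=1)                # (N,)

    # Recover the top keep_ratio fraction (at least one token),
    # preserving their original spatial order.
    k = max(1, int(round(keep_ratio * len(visual))))
    keep = np.sort(np.argsort(score)[::-1][:k])
    drop = np.setdiff1d(np.arange(len(visual)), keep)

    # Merge the less important tokens into a single averaged token.
    if len(drop):
        merged = visual[drop].mean(axis=0, keepdims=True)
        return np.concatenate([visual[keep], merged], axis=0)
    return visual[keep]
```

With `keep_ratio=0.1`, 20 visual tokens reduce to 2 recovered tokens plus 1 merged token, i.e. 3 tokens passed to the language model instead of 20, which matches the roughly 10% compression reported above.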