While excellent at transfer learning, Vision-Language Models (VLMs) come with high computational costs due to their large number of parameters. To address this issue, removing parameters via model pruning is a viable solution. However, existing techniques for VLMs are task-specific, and thus require pruning the network from scratch for each new task of interest. In this work, we explore a new direction: Task-Agnostic Vision-Language Pruning (TA-VLP). Given a pretrained VLM, the goal is to find a unique pruned counterpart transferable to multiple unknown downstream tasks. In this challenging setting, the transferable representations already encoded in the pretrained model are a key aspect to preserve. Thus, we propose Multimodal Flow Pruning (MULTIFLOW), a first gradient-free pruning framework for TA-VLP, where: (i) the importance of a parameter is expressed in terms of its magnitude and its information flow, by incorporating the saliency of the neurons it connects; and (ii) pruning is driven by the emergent (multimodal) distribution of the VLM parameters after pretraining. We benchmark eight state-of-the-art pruning algorithms in the context of TA-VLP, experimenting with two VLMs, three vision-language tasks, and three pruning ratios. Our experimental results show that MULTIFLOW outperforms recent sophisticated combinatorial competitors in the vast majority of cases, paving the way towards addressing TA-VLP. The code is publicly available at https://github.com/FarinaMatteo/multiflow.
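To make point (i) concrete, the sketch below illustrates the general idea of magnitude-times-flow scoring for a single weight matrix: each parameter's score combines its own magnitude with the saliency of the two neurons it connects, and the lowest-scoring fraction is zeroed out. This is a minimal conceptual approximation, not the authors' implementation; the `neuron_saliency` aggregation (mean absolute incident weight) and the per-matrix thresholding are simplifying assumptions, and MULTIFLOW additionally calibrates sparsity across modalities using the pretrained parameter distribution, which is omitted here.

```python
import numpy as np

def neuron_saliency(W):
    """Hypothetical neuron saliency: mean absolute weight of incident edges.

    For a weight matrix W of shape (out_dim, in_dim), column j collects the
    edges entering input neuron j, and row i the edges leaving output neuron i.
    """
    in_saliency = np.abs(W).mean(axis=0)   # one value per input neuron
    out_saliency = np.abs(W).mean(axis=1)  # one value per output neuron
    return in_saliency, out_saliency

def flow_importance(W):
    """Score each parameter by its magnitude modulated by the saliency of the
    two neurons it connects -- a rough proxy for its 'information flow'."""
    in_s, out_s = neuron_saliency(W)
    return np.abs(W) * np.outer(out_s, in_s)

def prune_by_ratio(W, ratio):
    """Zero out the `ratio` fraction of parameters with the lowest scores."""
    scores = flow_importance(W)
    k = int(ratio * W.size)
    if k == 0:
        return W.copy()
    # k-th smallest score is the pruning threshold.
    thresh = np.partition(scores.ravel(), k - 1)[k - 1]
    return W * (scores > thresh)
```

Note that, like MULTIFLOW, this scoring is gradient-free: it reads only the pretrained weights, so no task data or backward passes are needed, which is what makes a single task-agnostic pruned model possible.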