With the advancement of RNN models with linear complexity, the quadratic-complexity challenge of transformers has the potential to be overcome. Notably, the emerging Mamba-2 has demonstrated competitive performance, bridging the gap between RNN models and transformers. However, due to sequential processing and vanishing gradients, RNN models struggle to capture long-range dependencies, limiting contextual understanding. This results in slow convergence, high resource demands, and poor performance on downstream understanding and complex reasoning tasks. In this work, we present MaTVLM, a hybrid model constructed by substituting a portion of the transformer decoder layers in a pre-trained VLM with Mamba-2 layers. Leveraging the inherent relationship between attention and Mamba-2, we initialize Mamba-2 with the corresponding attention weights to accelerate convergence. Subsequently, we employ a single-stage distillation process, using the pre-trained VLM as the teacher model to transfer knowledge to MaTVLM, further improving convergence speed and performance. Furthermore, we investigate the impact of different choices of distillation loss within our training framework. We evaluate MaTVLM on multiple benchmarks, demonstrating competitive performance against the teacher model and existing VLMs while surpassing both Mamba-based VLMs and models of comparable parameter scale. Remarkably, MaTVLM achieves up to 3.6x faster inference than the teacher model while reducing GPU memory consumption by 27.5%, all without compromising performance. Code and models are released at http://github.com/hustvl/MaTVLM.
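The single-stage distillation described above transfers knowledge by matching the student's output distribution to the teacher's. As a minimal, framework-free sketch of such a logit-level distillation term, the snippet below computes a temperature-softened KL divergence between teacher and student logits; the temperature value, the T² scaling, and the function names are illustrative assumptions, not the paper's exact training configuration.

```python
import math

def softmax(logits, temperature=1.0):
    # Numerically stable temperature-scaled softmax over a list of logits.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_kl(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # (a common convention in knowledge distillation; assumed here).
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)
    return temperature ** 2 * kl
```

In practice this per-token term would be averaged over the sequence and combined with the standard next-token cross-entropy loss; the loss is zero when student and teacher agree exactly and grows as their distributions diverge.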