In recent years, the application of multimodal large language models (MLLMs) in various fields has achieved remarkable success. However, as the foundation models for many downstream tasks, current MLLMs are built on the well-known Transformer network, whose attention mechanism has a less efficient quadratic computational complexity. To improve the efficiency of such base models, we propose Cobra, an MLLM with linear computational complexity. Specifically, Cobra integrates the efficient Mamba language model into the visual modality. Moreover, we explore and study various modal fusion schemes to create an effective multimodal Mamba. Extensive experiments demonstrate that (1) Cobra achieves highly competitive performance against current computationally efficient state-of-the-art methods, e.g., LLaVA-Phi, TinyLLaVA, and MobileVLM v2, and runs faster thanks to its linear sequence modeling; (2) interestingly, results on challenging closed-set prediction benchmarks show that Cobra performs well at overcoming visual illusions and judging spatial relationships; and (3) notably, Cobra even achieves performance comparable to LLaVA with about 43% of its parameters. We will open-source all code for Cobra and hope that the proposed method can facilitate future research on complexity problems in MLLMs. Our project page is available at: https://sites.google.com/view/cobravlm.