Visual Language Models (VLMs) have progressed rapidly with the recent success of large language models. However, there have been few attempts to incorporate efficient linear Recurrent Neural Network (RNN) architectures into VLMs. In this study, we introduce VisualRWKV, the first application of a linear RNN model to multimodal learning tasks, leveraging the pre-trained RWKV language model. We propose a data-dependent recurrence mechanism and sandwich prompts to enhance modeling capability, along with a 2D image scanning mechanism to enrich the processing of visual sequences. Extensive experiments demonstrate that VisualRWKV achieves competitive performance compared to Transformer-based models such as LLaVA-1.5 across various benchmarks. At an inference length of 24K tokens, VisualRWKV is 3.98 times faster than LLaVA-1.5 and uses 54% less GPU memory. To facilitate further research and analysis, we have made the checkpoints and associated code publicly available at https://github.com/howard-hou/VisualRWKV.
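To make the 2D image scanning idea concrete, here is a minimal sketch of one plausible form of such a mechanism: flattening a 2D grid of image patch tokens along several directions so that a 1D recurrent model observes each spatial axis. The function name, scan orders, and grid layout are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of a 2D scanning mechanism (assumed scan orders,
# not necessarily those used in VisualRWKV).

def scan_2d(grid):
    """Return four 1D scans of a 2D grid of patch tokens:
    row-major forward, row-major reverse,
    column-major forward, column-major reverse."""
    rows = len(grid)
    cols = len(grid[0]) if rows else 0
    # Row-major: traverse each row left to right, top to bottom.
    row_major = [grid[r][c] for r in range(rows) for c in range(cols)]
    # Column-major: traverse each column top to bottom, left to right.
    col_major = [grid[r][c] for c in range(cols) for r in range(rows)]
    return [row_major, row_major[::-1], col_major, col_major[::-1]]

# Example: a 2x3 grid of patch indices.
grid = [[0, 1, 2],
        [3, 4, 5]]
scans = scan_2d(grid)
# row-major forward:    [0, 1, 2, 3, 4, 5]
# column-major forward: [0, 3, 1, 4, 2, 5]
```

Each scan can then be fed to the recurrent model as a separate sequence, giving the 1D recurrence access to both horizontal and vertical spatial context.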