Multimodal Large Language Models (MLLMs) have activated the capabilities of Large Language Models (LLMs) in solving vision-language tasks by integrating visual information. The prevailing approach in existing MLLMs involves employing an image encoder to extract visual features, converting these features into visual tokens via an adapter, and then integrating them with the prompt into the LLM. However, because the image encoding process is prompt-agnostic, the extracted visual features provide only a coarse description of the image and cannot focus on the requirements of the prompt. On the one hand, the image features can easily lack information about the objects specified in the prompt, resulting in unsatisfactory responses. On the other hand, the visual features contain a large amount of irrelevant information, which not only increases the memory burden but also degrades generation quality. To address these issues, we propose \textbf{PIP-MM}, a framework that \textbf{P}re-\textbf{I}ntegrates \textbf{P}rompt information into the visual encoding process using the existing modules of MLLMs. Specifically, we use the frozen LLM in the MLLM to vectorize the input prompt, summarizing its requirements. We then feed the prompt vector into a trained Multi-Layer Perceptron (MLP) to align it with the visual input requirements, and use the result to replace the class embedding in the image encoder. Since our method only requires adding a trainable MLP, it can be applied to any MLLM. To validate the effectiveness of PIP-MM, we conducted experiments on multiple benchmarks. Both automated evaluation metrics and manual assessments demonstrate the strong performance of PIP-MM. Notably, our model maintains excellent generation results even when the number of visual tokens is halved.
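To make the mechanism concrete, the following is a minimal PyTorch-style sketch of the idea described above: a trainable MLP maps a pooled prompt representation from the frozen LLM into the image encoder's embedding space, where it replaces the ViT class embedding. All dimensions, module names (e.g., PromptToCLS), and the mean-pooling choice are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class PromptToCLS(nn.Module):
    """Trainable MLP mapping a pooled prompt embedding (from the frozen LLM)
    into the image encoder's embedding space. Dimensions are assumptions."""

    def __init__(self, llm_dim: int = 4096, vit_dim: int = 1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(llm_dim, vit_dim),
            nn.GELU(),
            nn.Linear(vit_dim, vit_dim),
        )

    def forward(self, prompt_hidden: torch.Tensor) -> torch.Tensor:
        # prompt_hidden: (batch, seq_len, llm_dim) hidden states of the frozen LLM.
        pooled = prompt_hidden.mean(dim=1)  # one vector summarizing the prompt
        return self.mlp(pooled)             # (batch, vit_dim)


if __name__ == "__main__":
    batch, seq_len, llm_dim, vit_dim, n_patches = 2, 16, 4096, 1024, 256
    prompt_cls = PromptToCLS(llm_dim, vit_dim)

    prompt_hidden = torch.randn(batch, seq_len, llm_dim)   # stand-in for frozen-LLM states
    patch_tokens = torch.randn(batch, n_patches, vit_dim)  # stand-in for ViT patch embeddings

    # Prepend the prompt-conditioned vector in place of the learned class
    # embedding, so the encoder attends to the image conditioned on the prompt.
    cls = prompt_cls(prompt_hidden).unsqueeze(1)           # (batch, 1, vit_dim)
    tokens = torch.cat([cls, patch_tokens], dim=1)         # (batch, 1 + n_patches, vit_dim)
    print(tokens.shape)                                    # torch.Size([2, 257, 1024])
```

In this sketch only the MLP carries new parameters; the LLM and image encoder stay frozen, which is consistent with the claim that the method can be bolted onto any existing MLLM.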