This work presents ParGo, a novel Partial-Global projector designed to connect the vision and language modalities for Multimodal Large Language Models (MLLMs). Unlike previous works that rely on global attention-based projectors, our ParGo bridges the representation gap between separately pre-trained vision encoders and LLMs by integrating global and partial views, which alleviates the overemphasis on prominent regions. To facilitate effective training of ParGo, we collect a large-scale detail-captioned image-text dataset named ParGoCap-1M-PT, consisting of 1 million images paired with high-quality captions. Extensive experiments on several MLLM benchmarks demonstrate the effectiveness of ParGo, highlighting its superiority in aligning the vision and language modalities. Compared to the conventional Q-Former projector, ParGo achieves an improvement of 259.96 on the MME benchmark. Furthermore, our experiments reveal that ParGo significantly outperforms other projectors, particularly in tasks that emphasize detail perception.
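For illustration, the sketch below shows one plausible way a partial-global projector of this kind could be realized in PyTorch: a set of global learnable queries cross-attends to all visual tokens, while partial queries are each masked to a local window of tokens, and the concatenated outputs are projected into the LLM embedding space. Every name, dimension, and the window-partitioning scheme here are assumptions made for exposition; this is not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class PartialGlobalProjector(nn.Module):
    """Hypothetical sketch of a partial-global projector: global queries
    see all visual tokens; each partial query sees one local window."""

    def __init__(self, vis_dim=1024, llm_dim=4096,
                 n_global=32, n_partial=64, n_heads=8):
        super().__init__()
        # Learnable query tokens for the global and partial views.
        self.global_q = nn.Parameter(torch.randn(n_global, vis_dim) * 0.02)
        self.partial_q = nn.Parameter(torch.randn(n_partial, vis_dim) * 0.02)
        self.attn = nn.MultiheadAttention(vis_dim, n_heads, batch_first=True)
        self.proj = nn.Linear(vis_dim, llm_dim)  # map into the LLM space
        self.n_partial = n_partial

    def forward(self, vis_tokens):  # vis_tokens: (B, N, vis_dim)
        B, N, _ = vis_tokens.shape

        # Global path: every query attends to every visual token.
        g_q = self.global_q.expand(B, -1, -1)
        g_out, _ = self.attn(g_q, vis_tokens, vis_tokens)

        # Partial path: a boolean mask (True = blocked) restricts each
        # partial query to a contiguous window of visual tokens.
        p_q = self.partial_q.expand(B, -1, -1)
        mask = torch.ones(self.n_partial, N, dtype=torch.bool,
                          device=vis_tokens.device)
        for i in range(self.n_partial):
            start = i * N // self.n_partial
            end = max((i + 1) * N // self.n_partial, start + 1)
            mask[i, start:end] = False  # unmask this query's window
        p_out, _ = self.attn(p_q, vis_tokens, vis_tokens, attn_mask=mask)

        # Concatenate partial and global tokens for the LLM.
        return self.proj(torch.cat([p_out, g_out], dim=1))


# Usage: 256 visual tokens become 64 partial + 32 global LLM tokens.
projector = PartialGlobalProjector()
llm_tokens = projector(torch.randn(2, 256, 1024))  # -> (2, 96, 4096)
```

Under these assumptions, the fixed partial+global token budget (96 tokens above) is what the LLM consumes regardless of the visual sequence length, while the windowed masks are what keep local detail from being averaged away by purely global attention.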