Multi-modal large language models (MLLMs) have made significant strides in various visual understanding tasks. However, the majority of these models are constrained to processing low-resolution images, which limits their effectiveness in perception tasks that require detailed visual information. In this study, we present MG-LLaVA, an innovative MLLM that enhances the model's visual processing capabilities by incorporating a multi-granularity vision flow comprising low-resolution, high-resolution, and object-centric features. We propose integrating an additional high-resolution visual encoder to capture fine-grained details, which are then fused with base visual features through a Conv-Gate fusion network. To further refine the model's object recognition abilities, we incorporate object-level features derived from bounding boxes identified by offline detectors. Trained solely on publicly available multimodal data through instruction tuning, MG-LLaVA demonstrates exceptional perception skills. We instantiate MG-LLaVA with a wide variety of language encoders, ranging in size from 3.8B to 34B parameters, to evaluate the model's performance comprehensively. Extensive evaluations across multiple benchmarks demonstrate that MG-LLaVA outperforms existing MLLMs of comparable parameter size, showcasing its remarkable efficacy. The code will be available at https://github.com/PhoenixZ810/MG-LLaVA.
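The gated fusion of low- and high-resolution features described above can be illustrated with a minimal sketch. The abstract does not specify the Conv-Gate network's internals, so the design below (a 1x1 convolution over the concatenated feature maps producing a sigmoid gate, followed by a gated residual sum) is a hypothetical reconstruction, not the paper's actual implementation; the function names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1x1(x, w):
    # A 1x1 convolution is a per-pixel linear map over channels:
    # x has shape (C_in, H, W), w has shape (C_out, C_in).
    return np.einsum('oc,chw->ohw', w, x)

def conv_gate_fusion(low, high, w_gate):
    """Hypothetical Conv-Gate fusion: a gate computed from the
    concatenated low-/high-res features decides, per channel and
    position, how much fine-grained detail to mix in."""
    gate = sigmoid(conv1x1(np.concatenate([low, high], axis=0), w_gate))
    return low + gate * high  # gated residual fusion (assumed form)

# Toy usage: high-res features assumed already resized to the base grid.
C, H, W = 8, 4, 4
low = rng.standard_normal((C, H, W))    # base (low-resolution) features
high = rng.standard_normal((C, H, W))   # fine-grained (high-resolution) features
w_gate = rng.standard_normal((C, 2 * C)) * 0.1
fused = conv_gate_fusion(low, high, w_gate)
print(fused.shape)  # (8, 4, 4)
```

Object-level features from detector boxes would then be appended as additional visual tokens alongside the fused map before being passed to the language model.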