Recent progress in Multimodal Large Language Models (MLLMs) has highlighted the critical roles of both the visual backbone and the underlying language model. While prior work has primarily focused on scaling these components to billions of parameters, the trade-offs between model size, architecture, and performance remain underexplored. Additionally, inconsistencies in training data and evaluation protocols have hindered direct comparisons, making it difficult to derive optimal design choices. In this paper, we introduce LLaVA-MORE, a new family of MLLMs that integrates recent language models with diverse visual backbones. To ensure fair comparisons, we employ a unified training protocol applied consistently across all architectures. Our analysis systematically explores both small- and medium-scale LLMs -- including Phi-4, LLaMA-3.1, and Gemma-2 -- to evaluate multimodal reasoning, generation, and instruction following, while examining the relationship between model size and performance. Beyond evaluating the impact of the LLM on final results, we conduct a comprehensive study of various visual encoders, ranging from CLIP-based architectures to alternatives such as DINOv2, SigLIP, and SigLIP2. Additional experiments investigate the effects of increased image resolution and variations in pre-training datasets. Overall, our results provide insights into the design of more effective MLLMs, offering a reproducible evaluation framework that facilitates direct comparisons and can guide future model development. Our source code and trained models are publicly available at: https://github.com/aimagelab/LLaVA-MORE.