We present Florence-VL, a new family of multimodal large language models (MLLMs) with enriched visual representations produced by Florence-2, a generative vision foundation model. Unlike the widely used CLIP-style vision transformers trained by contrastive learning, Florence-2 can capture different levels and aspects of visual features, making it more versatile and adaptable to diverse downstream tasks. We propose a novel feature-fusion architecture and an innovative training recipe that effectively integrate Florence-2's visual features into pretrained LLMs such as Phi 3.5 and Llama 3. In particular, we propose depth-breadth fusion (DBFusion) to fuse visual features extracted from different depths and under multiple prompts. Our model training consists of end-to-end pretraining of the whole model, followed by finetuning of the projection layer and the LLM on a carefully designed recipe of diverse open-source datasets that include high-quality image captions and instruction-tuning pairs. Our quantitative analysis and visualization of Florence-VL's visual features show its advantages over popular vision encoders in vision-language alignment, where the enriched depth and breadth play important roles. Florence-VL achieves significant improvements over existing state-of-the-art MLLMs across various multimodal and vision-centric benchmarks covering general VQA, perception, hallucination, OCR, chart understanding, knowledge-intensive understanding, etc. To facilitate future research, our models and the complete training recipe are open-sourced at https://github.com/JiuhaiChen/Florence-VL.
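To make the depth-breadth fusion idea concrete, the minimal sketch below illustrates one plausible way to combine visual features taken from different encoder depths and produced under different Florence-2 prompts: concatenate the branch features along the channel dimension and project them into the LLM embedding space. The class name, the two-layer projection, and channel concatenation are illustrative assumptions for this sketch, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class DBFusionSketch(nn.Module):
    """Hypothetical sketch of depth-breadth fusion: concatenate visual
    features from several depth/prompt branches along the channel
    dimension, then project them into the LLM embedding space."""

    def __init__(self, vis_dim: int, num_branches: int, llm_dim: int):
        super().__init__()
        # Simple two-layer MLP projector (an assumption for illustration).
        self.proj = nn.Sequential(
            nn.Linear(vis_dim * num_branches, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, branch_features):
        # branch_features: list of (batch, tokens, vis_dim) tensors,
        # one per depth/prompt branch from the vision encoder.
        fused = torch.cat(branch_features, dim=-1)  # channel concatenation
        return self.proj(fused)                     # (batch, tokens, llm_dim)

# Usage with three hypothetical branches (e.g., a lower-level feature map
# plus features produced under two different Florence-2 prompts).
feats = [torch.randn(2, 576, 1024) for _ in range(3)]
tokens = DBFusionSketch(vis_dim=1024, num_branches=3, llm_dim=4096)(feats)
print(tokens.shape)  # torch.Size([2, 576, 4096])
```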