Embedding models have been crucial in enabling various downstream tasks such as semantic similarity, information retrieval, and clustering. Recently, there has been a surge of interest in developing universal text embedding models that can generalize across tasks (e.g., MTEB). However, progress in learning universal multimodal embedding models has been relatively slow despite their importance. In this work, we aim to explore the potential for building universal embeddings capable of handling a wide range of downstream tasks. Our contributions are twofold: (1) MMEB (Massive Multimodal Embedding Benchmark), which covers 4 meta-tasks (i.e., classification, visual question answering, multimodal retrieval, and visual grounding) and 36 datasets, including 20 training and 16 evaluation datasets, and (2) VLM2Vec (Vision-Language Model → Vector), a contrastive training framework that converts any state-of-the-art vision-language model into an embedding model via training on MMEB. Unlike previous models such as CLIP and BLIP, VLM2Vec can process any combination of images and text to generate a fixed-dimensional vector based on task instructions. We build a series of VLM2Vec models on Phi-3.5-V and evaluate them on MMEB's evaluation split. Our results show that VLM2Vec achieves an absolute average improvement of 10% to 20% over existing multimodal embedding models on both in-distribution and out-of-distribution datasets in MMEB.
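To make the recipe concrete, the following is a minimal sketch of the two pieces the abstract implies: pooling a vision-language model's hidden state into a fixed-dimensional vector, and contrastive training with in-batch negatives (InfoNCE). This is not the authors' released code; the `backbone`, last-token pooling, and the temperature value are assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the paper's implementation):
# - `hidden_states` comes from any HuggingFace-style VLM backbone fed an
#   instruction-prefixed (image, text) input; its exact interface is assumed.
# - Last-token pooling and temperature=0.02 are illustrative choices.
import torch
import torch.nn.functional as F

def pool_last_token(hidden_states: torch.Tensor,
                    attention_mask: torch.Tensor) -> torch.Tensor:
    """Take the hidden state of each sequence's last non-padded token,
    yielding one fixed-dimensional vector per input."""
    last_idx = attention_mask.sum(dim=1) - 1               # (batch,)
    batch_idx = torch.arange(hidden_states.size(0),
                             device=hidden_states.device)
    return hidden_states[batch_idx, last_idx]              # (batch, dim)

def info_nce_loss(query_emb: torch.Tensor,
                  target_emb: torch.Tensor,
                  temperature: float = 0.02) -> torch.Tensor:
    """Contrastive loss with in-batch negatives: the i-th target is the
    positive for the i-th query; all other targets are negatives."""
    q = F.normalize(query_emb, dim=-1)
    t = F.normalize(target_emb, dim=-1)
    logits = q @ t.T / temperature                         # (batch, batch)
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)
```

In this sketch, queries would be the instruction-conditioned (image, text) inputs and targets the candidate texts or images for the task; because a single pooled vector serves classification, retrieval, VQA, and grounding alike, the same loss covers all four meta-tasks.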