Multimodal retrieval is the task of aggregating information from queries across heterogeneous modalities to retrieve desired targets. State-of-the-art multimodal retrieval models can understand complex queries, yet they are typically limited to two modalities: text and vision. This limitation impedes the development of universal retrieval systems capable of comprehending queries that combine more than two modalities. To advance toward this goal, we present OmniRet, the first retrieval model capable of handling complex, composed queries spanning three key modalities: text, vision, and audio. OmniRet addresses two critical challenges for universal retrieval: computational efficiency and representation fidelity. First, feeding the massive token sequences produced by modality-specific encoders into Large Language Models (LLMs) is computationally inefficient; we therefore introduce an attention-based resampling mechanism that compresses these sequences into compact, fixed-size representations. Second, compressing rich omni-modal data into a single embedding vector inevitably causes information loss and discards fine-grained details; we propose Attention Sliced Wasserstein Pooling to preserve these details, leading to improved omni-modal representations. OmniRet is trained on approximately 6 million query-target pairs aggregated from 30 datasets. We benchmark our model on 13 retrieval tasks and a subset of MMEBv2. It demonstrates significant improvements on composed-query, audio, and video retrieval tasks, while achieving on-par performance with state-of-the-art models on the others. Furthermore, we curate a new Audio-Centric Multimodal Benchmark (ACM), which introduces two critical, previously missing tasks, composed audio retrieval and audio-visual retrieval, to more comprehensively evaluate a model's omni-modal embedding capacity.
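The abstract names the attention-based resampling mechanism but does not specify its design. Below is a minimal sketch in the spirit of Perceiver-style resamplers, where a fixed set of learned latent queries cross-attends over the encoder's variable-length token sequence; the class name `AttentionResampler`, the single-block structure, and all hyperparameters are illustrative assumptions, not OmniRet's actual architecture.

```python
import torch
import torch.nn as nn

class AttentionResampler(nn.Module):
    """Compress a variable-length token sequence into num_latents tokens.

    A hypothetical sketch: learned latent queries cross-attend over the
    encoder output, so the result has a fixed size regardless of input length.
    """

    def __init__(self, dim: int, num_latents: int = 64, num_heads: int = 8):
        super().__init__()
        # Learned query tokens; their count fixes the output size.
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, dim) from a modality-specific encoder.
        batch = tokens.shape[0]
        queries = self.latents.unsqueeze(0).expand(batch, -1, -1)
        # Latent queries attend over the full token sequence.
        pooled, _ = self.cross_attn(queries, tokens, tokens)
        # Output is (batch, num_latents, dim) regardless of seq_len.
        return self.norm(pooled + queries)


# Usage: a 1,500-token audio sequence is reduced to 64 tokens before the LLM.
resampler = AttentionResampler(dim=1024, num_latents=64)
audio_tokens = torch.randn(2, 1500, 1024)
compact = resampler(audio_tokens)  # shape: (2, 64, 1024)
```

The design intuition is that the LLM then consumes only `num_latents` tokens per modality, decoupling its cost from the raw encoder sequence length.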
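Similarly, the abstract does not define Attention Sliced Wasserstein Pooling. The sketch below shows one plausible reading under stated assumptions: each token set is summarized by attention-weighted quantiles of its projections onto random 1-D slices, so the pooled vector retains distributional structure rather than only a mean. The class name `AttentionSWPool`, the learned scoring head, and all hyperparameters are hypothetical stand-ins, not the paper's method.

```python
import torch
import torch.nn as nn

class AttentionSWPool(nn.Module):
    """Hypothetical attention-weighted sliced-Wasserstein pooling sketch."""

    def __init__(self, dim: int, num_slices: int = 128, num_quantiles: int = 8):
        super().__init__()
        # Random unit projection directions (the "slices"), fixed at init.
        directions = torch.randn(dim, num_slices)
        self.register_buffer(
            "directions", directions / directions.norm(dim=0, keepdim=True)
        )
        # A learned head scores each token's importance (assumed design).
        self.score = nn.Linear(dim, 1)
        # Uniform quantile levels at which each slice's distribution is read out.
        self.register_buffer("levels", torch.linspace(0.0, 1.0, num_quantiles))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, n, dim)
        b, n, _ = tokens.shape
        attn = self.score(tokens).squeeze(-1).softmax(dim=-1)   # (b, n)
        proj = (tokens @ self.directions).transpose(1, 2)       # (b, slices, n)
        w = attn.unsqueeze(1).expand_as(proj)                   # (b, slices, n)
        # Sort projections per slice; build the attention-weighted CDF.
        order = proj.argsort(dim=-1)
        v = proj.gather(-1, order)
        cdf = w.gather(-1, order).cumsum(-1)
        cdf = cdf / cdf[..., -1:].clamp_min(1e-8)
        # Read off weighted quantiles at the fixed levels.
        levels = self.levels.expand(b, proj.shape[1], -1).contiguous()
        idx = torch.searchsorted(cdf.contiguous(), levels).clamp(max=n - 1)
        quantiles = v.gather(-1, idx)                           # (b, slices, q)
        return quantiles.flatten(1)                             # (b, slices * q)


# Usage: a fixed-size embedding whose Euclidean distance approximates the
# sliced Wasserstein distance between two token distributions.
pool = AttentionSWPool(dim=1024)
embedding = pool(torch.randn(2, 1500, 1024))  # shape: (2, 128 * 8)
```

Under this reading, the quantile readout is what preserves fine-grained detail: unlike mean pooling, which keeps only the first moment per direction, each slice contributes several points of its (attention-weighted) distribution to the final embedding.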