Multimodal retrieval is the task of aggregating information from queries across heterogeneous modalities to retrieve desired targets. State-of-the-art multimodal retrieval models can understand complex queries, yet they are typically limited to two modalities: text and vision. This limitation impedes the development of universal retrieval systems capable of comprehending queries that combine more than two modalities. To advance toward this goal, we present OmniRet, the first retrieval model capable of handling complex, composed queries spanning three key modalities: text, vision, and audio. OmniRet addresses two critical challenges for universal retrieval: computational efficiency and representation fidelity. First, feeding massive token sequences from modality-specific encoders to Large Language Models (LLMs) is computationally inefficient. We therefore introduce an attention-based resampling mechanism that generates compact, fixed-size representations from these sequences. Second, compressing rich omni-modal data into a single embedding vector inevitably causes information loss and discards fine-grained details. We propose Attention Sliced Wasserstein Pooling to preserve these fine-grained details, leading to improved omni-modal representations. OmniRet is trained on an aggregation of approximately 6 million query-target pairs spanning 30 datasets. We benchmark our model on 13 retrieval tasks and an MMEBv2 subset. Our model demonstrates significant improvements on composed-query, audio, and video retrieval tasks, while achieving on-par performance with state-of-the-art models on the remaining tasks. Furthermore, we curate a new Audio-Centric Multimodal Benchmark (ACM). This benchmark introduces two critical, previously missing tasks, composed audio retrieval and audio-visual retrieval, to more comprehensively evaluate a model's omni-modal embedding capacity.
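To make the efficiency argument concrete, the sketch below shows one way an attention-based resampler of this kind could be realized: a small set of learnable latent queries cross-attends to the encoder's variable-length token sequence, so the LLM receives a fixed number of tokens regardless of input length. This is a minimal PyTorch sketch under stated assumptions (the module name, dimensions, and single cross-attention block are hypothetical), not the exact module used in OmniRet.

```python
import torch
import torch.nn as nn

class AttentionResampler(nn.Module):
    """Perceiver-style resampler: compresses a variable-length token sequence
    from a modality-specific encoder into a fixed number of latent tokens via
    cross-attention. Hypothetical sketch, not the paper's exact architecture."""

    def __init__(self, dim: int = 1024, num_latents: int = 32, num_heads: int = 8):
        super().__init__()
        # Learnable latent queries define the fixed output length.
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, dim) from a modality-specific encoder.
        b = tokens.size(0)
        q = self.latents.unsqueeze(0).expand(b, -1, -1)
        # Latent queries attend to the full token sequence and absorb its content.
        attended, _ = self.cross_attn(q, tokens, tokens)
        x = self.norm(q + attended)
        return x + self.mlp(x)  # (batch, num_latents, dim), independent of seq_len
```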
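Similarly, the following sketch illustrates the general idea behind sliced-Wasserstein style pooling with attention weighting: tokens are softly re-weighted by a learned score, projected onto random one-dimensional slices, and sorted per slice, so the pooled vector approximates the token distribution's quantile functions rather than a single mean and thus retains fine-grained detail. All names, shapes, and the quantile-subsampling step are assumptions made for illustration; the paper's Attention Sliced Wasserstein Pooling may differ in its exact formulation.

```python
import torch
import torch.nn as nn

class AttentionSlicedWassersteinPooling(nn.Module):
    """Hedged sketch of attention-weighted sliced-Wasserstein pooling.
    Tokens are projected onto fixed random slicing directions; sorted
    projections approximate per-slice empirical quantiles, preserving
    distributional detail a plain mean would discard."""

    def __init__(self, dim: int, num_slices: int = 64, num_quantiles: int = 16):
        super().__init__()
        # Fixed random slicing directions (unit vectors in embedding space).
        directions = torch.randn(num_slices, dim)
        self.register_buffer("directions", directions / directions.norm(dim=-1, keepdim=True))
        self.score = nn.Linear(dim, 1)  # token-level attention scores
        self.num_quantiles = num_quantiles

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, dim)
        weights = torch.softmax(self.score(tokens).squeeze(-1), dim=-1)  # (b, n)
        weighted = tokens * weights.unsqueeze(-1)       # soft token re-weighting
        proj = weighted @ self.directions.t()           # (b, n, num_slices)
        sorted_proj, _ = proj.sort(dim=1)               # per-slice empirical quantiles
        # Subsample a fixed number of quantiles so the output size is length-independent.
        idx = torch.linspace(0, tokens.size(1) - 1, self.num_quantiles).long().to(tokens.device)
        pooled = sorted_proj[:, idx, :]                 # (b, num_quantiles, num_slices)
        return pooled.flatten(1)                        # (b, num_quantiles * num_slices)
```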