Existing retrieval benchmarks consist primarily of text-based queries for which keyword or semantic matching is usually sufficient. Yet many real-world queries contain multimodal elements, particularly images such as diagrams, charts, and screenshots, that require intensive reasoning to identify relevant documents. To address this gap, we introduce MM-BRIGHT, the first multimodal benchmark for reasoning-intensive retrieval. Our dataset consists of 2,803 real-world queries spanning 29 diverse technical domains, organized into four tasks of increasing complexity: text-to-text, multimodal-to-text, multimodal-to-image, and multimodal-to-multimodal retrieval. Extensive evaluation reveals that state-of-the-art models struggle across all tasks: BM25 achieves only 8.5 nDCG@10 on text-only retrieval, while the best multimodal model, Nomic-Vision, reaches just 27.6 nDCG@10 on multimodal-to-text retrieval, actually underperforming the best text-only model (DiVeR: 32.2). These results highlight substantial headroom and position MM-BRIGHT as a testbed for next-generation retrieval models that better integrate visual reasoning. Our code and data are available at https://github.com/mm-bright/MM-BRIGHT. See also our official website: https://mm-bright.github.io/.
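All scores above are reported in nDCG@10, the standard graded-relevance ranking metric. As a reference for readers unfamiliar with it, the following is a minimal, self-contained sketch of the computation (the function name and call pattern are illustrative, not MM-BRIGHT's evaluation code, which may use a library such as pytrec_eval):

```python
import math

def ndcg_at_k(ranked_relevances, all_relevances, k=10):
    """Compute nDCG@k from graded relevance labels.

    ranked_relevances: relevance label of each document in the order
    the retriever returned them. all_relevances: labels of every
    relevant document for the query, used to build the ideal ranking.
    """
    def dcg(rels):
        # Discounted cumulative gain over the top-k positions.
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

    idcg = dcg(sorted(all_relevances, reverse=True))
    return dcg(ranked_relevances) / idcg if idcg > 0 else 0.0

# A perfect ranking scores 1.0; placing the relevant document
# at rank 2 instead of rank 1 lowers the score.
print(ndcg_at_k([1, 0, 0], [1]))  # 1.0
print(ndcg_at_k([0, 1, 0], [1]))  # ~0.63
```

Because the discount is logarithmic in rank, a model is rewarded most for placing relevant documents near the top of the list, which is why nDCG@10 is a common choice for first-page retrieval quality.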