Multimodal retrieval-augmented generation (MM-RAG) is a key approach for applying large language models (LLMs) and agents to real-world knowledge bases, yet current evaluations are fragmented: they focus on either text or images in isolation, or on simplified multimodal setups, and fail to capture document-centric multimodal use cases. In this paper, we introduce UniDoc-Bench, the first large-scale, realistic benchmark for MM-RAG, built from $k$ real-world PDF pages across domains. Our pipeline extracts and links evidence from text, tables, and figures, then generates multimodal QA pairs spanning factual-retrieval, comparison, summarization, and logical-reasoning queries. To ensure reliability, all QA pairs are validated by multiple human annotators with expert adjudication. UniDoc-Bench supports apples-to-apples comparison across four paradigms under a unified protocol with standardized candidate pools, prompts, and evaluation metrics: 1) text-only retrieval, 2) image-only retrieval, 3) \emph{multimodal} text-image fusion, and 4) multimodal joint retrieval. UniDoc-Bench can also be used to evaluate Visual Question Answering (VQA) tasks. Our experiments show that multimodal text-image fusion RAG systems consistently outperform both unimodal retrieval and retrieval based on joint multimodal embeddings, indicating that neither text nor images alone are sufficient and that current multimodal embeddings remain inadequate. Beyond benchmarking, our analysis reveals when and how visual context complements textual evidence, uncovers systematic failure modes, and offers actionable guidance for developing more robust MM-RAG pipelines.
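To make the unified protocol concrete, the minimal sketch below illustrates how the four paradigms can be scored against one shared candidate pool so that only the retrieval representation varies; it is our illustration under stated assumptions, not the benchmark's released code, and every function and field name in it is hypothetical.

\begin{verbatim}
# Hypothetical sketch of the unified four-paradigm comparison
# (all names are illustrative, not from the UniDoc-Bench release).
import numpy as np

def cosine(q, C):
    # similarity of one query vector against a matrix of candidates
    q = q / np.linalg.norm(q)
    C = C / np.linalg.norm(C, axis=1, keepdims=True)
    return C @ q

def retrieve(q_text, q_img, pool, paradigm, k=5):
    # `pool` holds, per candidate chunk, a text embedding, an image
    # embedding, and a single joint multimodal embedding; the pool,
    # k, and downstream prompts are held fixed across paradigms.
    if paradigm == "text_only":
        s = cosine(q_text, pool["text"])
    elif paradigm == "image_only":
        s = cosine(q_img, pool["image"])
    elif paradigm == "fusion":   # merge the two unimodal score lists
        s = 0.5 * cosine(q_text, pool["text"]) \
          + 0.5 * cosine(q_img, pool["image"])
    else:                        # "joint": one shared embedding space
        s = cosine(q_text, pool["joint"])
    return np.argsort(-s)[:k]    # indices of top-k evidence chunks

def recall_at_k(retrieved, gold):
    # fraction of gold evidence chunks recovered in the top-k
    return len(set(retrieved.tolist()) & gold) / max(len(gold), 1)
\end{verbatim}

Holding the candidate pool, prompts, and metric fixed while swapping only the scoring representation is what makes the resulting comparison apples-to-apples.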