Current Retrieval-Augmented Generation (RAG) systems operate primarily on unimodal textual data, which limits their effectiveness on unstructured multimodal documents. Such documents often combine text, images, tables, equations, and graphs, each contributing unique information. In this work, we present a Modality-Aware Hybrid retrieval Architecture (MAHA), designed for multimodal question answering with reasoning over a modality-aware knowledge graph. MAHA integrates dense vector retrieval with structured graph traversal, where the knowledge graph encodes cross-modal semantics and relationships. This design enables retrieval that is both semantically rich and context-aware across diverse modalities. Evaluations on multiple benchmark datasets demonstrate that MAHA substantially outperforms baseline methods, achieving a ROUGE-L score of 0.486 while providing complete modality coverage. These results highlight MAHA's ability to combine embeddings with explicit document structure, enabling effective multimodal retrieval. Our work establishes a scalable and interpretable retrieval framework that advances RAG systems by enabling modality-aware reasoning over unstructured multimodal data.
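To make the hybrid design concrete, the sketch below illustrates one way dense retrieval and graph traversal could be combined: query embeddings rank multimodal chunks by cosine similarity, and the top dense hits are then expanded along a modality-aware knowledge graph so structurally linked modalities (e.g., a table referenced by a paragraph) are pulled into the candidate set. This is a minimal illustration under assumed data structures and an assumed fusion rule; the class names, scoring, and `graph_weight` parameter are not taken from the paper.

```python
# Illustrative sketch of modality-aware hybrid retrieval (assumed design,
# not the paper's implementation): dense vector search over multimodal chunk
# embeddings, followed by expansion along a cross-modal knowledge graph.
from dataclasses import dataclass, field

import numpy as np


def unit(v: np.ndarray) -> np.ndarray:
    """Normalise an embedding so dot products equal cosine similarity."""
    return v / np.linalg.norm(v)


@dataclass
class Chunk:
    chunk_id: str
    modality: str          # e.g. "text", "image", "table", "equation"
    embedding: np.ndarray  # unit-normalised embedding of the chunk


@dataclass
class KnowledgeGraph:
    # Adjacency list: chunk_id -> ids of cross-modal neighbours
    # (e.g. a paragraph linked to the table and figure it references).
    edges: dict[str, list[str]] = field(default_factory=dict)

    def neighbours(self, chunk_id: str) -> list[str]:
        return self.edges.get(chunk_id, [])


def hybrid_retrieve(query_emb, chunks, graph, k=3, graph_weight=0.5):
    """Score chunks by cosine similarity, then boost graph neighbours of the
    top-k dense hits so structurally related modalities are also retrieved."""
    dense = {c.chunk_id: float(query_emb @ c.embedding) for c in chunks}
    top = sorted(dense, key=dense.get, reverse=True)[:k]

    fused = dict(dense)
    for cid in top:
        for nid in graph.neighbours(cid):
            # A neighbour inherits a fraction of its seed's dense score.
            fused[nid] = max(fused.get(nid, 0.0), graph_weight * dense[cid])
    return sorted(fused, key=fused.get, reverse=True)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    chunks = [
        Chunk("p1", "text", unit(rng.normal(size=8))),
        Chunk("t1", "table", unit(rng.normal(size=8))),
        Chunk("f1", "image", unit(rng.normal(size=8))),
    ]
    # Paragraph p1 cites table t1 and figure f1.
    graph = KnowledgeGraph(edges={"p1": ["t1", "f1"]})
    query = unit(rng.normal(size=8))
    print(hybrid_retrieve(query, chunks, graph))
```

In this toy example, even if the table or figure embedding is a poor direct match for the query, it can still be retrieved through its graph link to the matching paragraph, which is the intuition behind combining embeddings with explicit document structure.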