Multimodal document retrieval aims to retrieve query-relevant components from documents composed of textual, tabular, and visual elements. An effective multimodal retriever must address two main challenges: (1) mitigating the effect of irrelevant content caused by fixed, single-granularity retrieval units, and (2) supporting multi-hop reasoning by effectively capturing semantic relationships among components within and across documents. To address these challenges, we propose LILaC, a multimodal retrieval framework featuring two core innovations. First, we introduce a layered component graph that explicitly represents multimodal information at two layers, one coarse-grained and one fine-grained, facilitating efficient yet precise reasoning. Second, we develop a late-interaction-based subgraph retrieval method, an edge-based approach that first identifies coarse-grained nodes for efficient candidate generation and then performs fine-grained reasoning via late interaction. Extensive experiments demonstrate that LILaC achieves state-of-the-art retrieval performance on all five benchmarks, notably without any additional fine-tuning. We make the artifacts publicly available at github.com/joohyung00/lilac.