Documents are visually rich structures that convey information through text, as well as through tables, figures, page layouts, and fonts. While modern document retrieval systems exhibit strong performance on query-to-text matching, they struggle to exploit visual cues efficiently, hindering their performance on practical document retrieval applications such as Retrieval-Augmented Generation. To benchmark current systems on visually rich document retrieval, we introduce the Visual Document Retrieval Benchmark ViDoRe, composed of various page-level retrieval tasks spanning multiple domains, languages, and settings. The inherent shortcomings of modern systems motivate the introduction of a new retrieval model architecture, ColPali, which leverages the document understanding capabilities of recent Vision Language Models to produce high-quality contextualized embeddings solely from images of document pages. Combined with a late interaction matching mechanism, ColPali largely outperforms modern document retrieval pipelines while being drastically faster and end-to-end trainable.
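The late interaction matching mechanism mentioned above can be sketched as a ColBERT-style MaxSim score: each query token embedding is matched against its most similar document patch embedding, and these per-token maxima are summed. The function name and toy dimensions below are illustrative assumptions, not taken from the paper:

```python
import math
import random

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def late_interaction_score(query_emb, doc_emb):
    """MaxSim late interaction: for each query token embedding,
    take the maximum similarity over all document patch embeddings,
    then sum over query tokens."""
    return sum(max(cosine(q, d) for d in doc_emb) for q in query_emb)

# Toy example: 8 query token embeddings matched against
# 100 page-patch embeddings, all 64-dimensional (illustrative sizes).
random.seed(0)
query = [[random.gauss(0, 1) for _ in range(64)] for _ in range(8)]
page = [[random.gauss(0, 1) for _ in range(64)] for _ in range(100)]
score = late_interaction_score(query, page)
```

Because each per-token maximum is a cosine similarity bounded by 1, the score of a query against itself equals the number of query tokens, and any cross score is bounded by that count; ranking pages by this score is what makes the interaction "late": query and page are embedded independently and only compared at scoring time.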