The recent advent of Large Visual-Language Models (LVLMs) has attracted increasing attention across various domains, particularly in the field of visual document understanding (VDU). Unlike conventional vision-language tasks, VDU is specifically concerned with text-rich scenarios containing abundant document elements. Nevertheless, the importance of fine-grained features remains largely unexplored within the LVLM community, leading to suboptimal performance in text-rich scenarios. In this paper, we refer to this limitation as the fine-grained feature collapse issue. To fill this gap, we propose a contrastive learning framework, termed Document Object COntrastive learning (DoCo), tailored for the downstream tasks of VDU. DoCo leverages an auxiliary multimodal encoder to obtain the features of document objects and aligns them with the visual features generated by the vision encoder of the LVLM, which enhances visual representation in text-rich scenarios. In this way, contrastive learning between the holistic visual representations and the multimodal fine-grained features of document objects assists the vision encoder in acquiring more effective visual cues, thereby enhancing the comprehension of text-rich documents in LVLMs. We also demonstrate that DoCo serves as a plug-and-play pre-training method that can be employed in the pre-training of various LVLMs without any increase in computational complexity during inference. Extensive experimental results on multiple VDU benchmarks reveal that LVLMs equipped with DoCo achieve superior performance and narrow the gap between VDU and generic vision-language tasks.
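To make the alignment objective concrete, the sketch below shows one plausible instantiation: a symmetric InfoNCE-style contrastive loss between pooled features from the LVLM's vision encoder and pooled multimodal features of document objects from the auxiliary encoder. The function names, feature dimensions, pooling choice, and loss form are our assumptions for illustration, not the exact formulation of DoCo.

```python
# Minimal sketch of a DoCo-style contrastive alignment loss.
# Assumptions: features are already pooled to (B, D); the symmetric
# InfoNCE form and temperature are illustrative, not the paper's exact method.
import torch
import torch.nn.functional as F

def doco_contrastive_loss(visual_feats: torch.Tensor,
                          object_feats: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Align holistic visual features from the LVLM vision encoder with
    fine-grained document-object features from an auxiliary encoder.

    visual_feats: (B, D) pooled outputs of the vision encoder.
    object_feats: (B, D) pooled multimodal features of document objects.
    """
    v = F.normalize(visual_feats, dim=-1)
    o = F.normalize(object_feats, dim=-1)
    logits = v @ o.t() / temperature  # (B, B) pairwise similarities
    targets = torch.arange(v.size(0), device=v.device)
    # Symmetric InfoNCE: each image matches its own document objects, and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage: pooled features for a batch of 8 document images, dim 768 (hypothetical).
if __name__ == "__main__":
    v = torch.randn(8, 768)  # e.g., mean-pooled patch features from the vision encoder
    o = torch.randn(8, 768)  # e.g., pooled document-object features from the auxiliary encoder
    print(doco_contrastive_loss(v, o).item())
```

Because the auxiliary encoder and this loss are used only during pre-training, they can be discarded afterward, which is consistent with the claim that inference-time computational complexity is unchanged.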