Visual document understanding (VDU) has advanced rapidly with the development of powerful multi-modal language models. However, these models typically require extensive document pre-training data to learn intermediate representations, and they often suffer a significant performance drop in real-world online industrial settings. A primary issue is their heavy reliance on OCR engines to extract local positional information within document pages, which limits the models' ability to capture global information and hinders their generalizability, flexibility, and robustness. In this paper, we introduce GlobalDoc, a cross-modal transformer-based architecture pre-trained in a self-supervised manner using three novel pretext tasks. GlobalDoc improves the learning of richer semantic concepts by unifying language and visual representations, resulting in more transferable models. For proper evaluation, we also propose two novel document-level downstream VDU tasks, Few-Shot Document Image Classification (DIC) and Content-based Document Image Retrieval (DIR), designed to simulate industrial scenarios more closely. Extensive experiments demonstrate GlobalDoc's effectiveness in practical settings.