Visual Information Extraction (VIE) plays a crucial role in the comprehension of semi-structured documents, and several pre-trained models have been developed to enhance its performance. However, most of these works are monolingual (usually English). Due to the extreme imbalance in the quantity and quality of pre-training corpora between English and other languages, few works can extend to non-English scenarios. In this paper, we conduct systematic experiments showing that the vision and layout modalities are invariant across images in different languages. If language bias is decoupled from document images, a vision-layout-based model can achieve impressive cross-lingual generalization. Accordingly, we present LDP (Language Decoupled Pre-training), a simple but effective multilingual training paradigm for better utilization of monolingual pre-training data. Our proposed model, LDM (Language Decoupled Model), is first pre-trained on language-independent data, where language knowledge is decoupled by a diffusion model, and then fine-tuned on the downstream languages. Extensive experiments show that LDM outperforms all SOTA multilingual pre-trained models while remaining competitive on downstream monolingual/English benchmarks.