In the domain of Document AI, parsing semi-structured image forms is a crucial Key Information Extraction (KIE) task. The advent of pre-trained multimodal models has significantly empowered Document AI frameworks to extract key information from form documents in different formats such as PDF, Word, and images. Nonetheless, form parsing remains encumbered by notable challenges, including subpar multilingual parsing capabilities and diminished recall in industrial contexts with rich text and rich visuals. In this work, we introduce a simple but effective \textbf{M}ultimodal and \textbf{M}ultilingual semi-structured \textbf{FORM} \textbf{PARSER} (\textbf{XFormParser}), which is anchored on a comprehensive Transformer-based pre-trained language model and innovatively amalgamates semantic entity recognition (SER) and relation extraction (RE) into a unified framework. Combined with a Bi-LSTM, it significantly improves multilingual parsing performance. Furthermore, we develop InDFormSFT, a pioneering supervised fine-tuning (SFT) industrial dataset that specifically addresses the parsing needs of forms in various industrial contexts. XFormParser has demonstrated its effectiveness and robustness through rigorous testing on established benchmarks. Compared to existing state-of-the-art (SOTA) models, XFormParser achieves an F1 score improvement of up to 1.79\% on RE tasks in language-specific settings, and it exhibits exceptional cross-task performance improvements in multilingual and zero-shot settings. The code, datasets, and pre-trained models are publicly available at https://github.com/zhbuaa0/xformparser.