This paper defines and explores the design space for information extraction (IE) from layout-rich documents using large language models (LLMs). The three core challenges of layout-aware IE with LLMs are 1) data structuring, 2) model engagement, and 3) output refinement. Our study investigates the sub-problems and methods within these core challenges, such as input representation, chunking, prompting, selection of LLMs, and multimodal models. It examines the effect of different design choices through LayIE-LLM, a new, open-source, layout-aware IE test suite, benchmarking against traditional, fine-tuned IE models. The results on two IE datasets show that LLMs require adjustment of the IE pipeline to achieve competitive performance: the optimized configuration found with LayIE-LLM achieves 13.3--37.5 F1 points more than a general-practice baseline configuration using the same LLM. To find a well-working configuration, we develop a one-factor-at-a-time (OFAT) method that achieves near-optimal results. Our method scores only 0.8--1.8 points lower than the best configuration found by full factorial exploration, at a fraction (2.8%) of the required computation. Overall, we demonstrate that, if well-configured, general-purpose LLMs match the performance of specialized models, providing a cost-effective, finetuning-free alternative. Our test suite is available at https://github.com/gayecolakoglu/LayIE-LLM.
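The one-factor-at-a-time (OFAT) search mentioned above can be sketched as follows. This is a minimal illustration of the general OFAT idea, not the paper's actual pipeline: the factor names, candidate levels, and scoring function are hypothetical placeholders standing in for pipeline choices (e.g., input representation, chunking, prompting) scored by F1 on a development set.

```python
def ofat_search(factors, score, baseline):
    """Optimize one factor at a time, holding the others fixed.

    factors:  dict mapping factor name -> list of candidate levels
    score:    callable(config dict) -> float (e.g., F1 on a dev set)
    baseline: starting configuration (dict factor name -> level)
    """
    best = dict(baseline)
    best_score = score(best)
    # Sweep each factor's levels while keeping the current best
    # choice for all other factors.
    for name, levels in factors.items():
        for level in levels:
            trial = dict(best, **{name: level})
            s = score(trial)
            if s > best_score:
                best, best_score = trial, s
    return best, best_score


# Toy usage: three hypothetical factors with an additive score.
# OFAT evaluates about sum(len(levels)) configurations instead of
# the product required by a full factorial sweep.
factors = {"a": [0, 1, 2, 3], "b": [0, 1, 2], "c": [0, 1, 2]}
best, s = ofat_search(factors,
                      score=lambda c: c["a"] + c["b"] + c["c"],
                      baseline={"a": 0, "b": 0, "c": 0})
```

Because each factor is swept independently, OFAT can miss optima that depend on interactions between factors, which is why the paper reports it landing slightly (0.8--1.8 F1 points) below the full factorial optimum while using only a small fraction of the compute.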