Document layout analysis aims to detect and categorize the structural elements (e.g., titles, tables, figures) of scanned or digital documents. Popular methods often rely on high-quality Optical Character Recognition (OCR) to merge visual features with extracted text. This dependency introduces two major drawbacks: propagation of text recognition errors and substantial computational overhead, limiting the robustness and practical applicability of multimodal approaches. In contrast to the prevailing multimodal trend, we argue that effective layout analysis hinges not on text-visual fusion but on a deep understanding of a document's intrinsic visual structure. To this end, we propose PARL (Position-Aware Relation Learning Network), a novel OCR-free, vision-only framework that models layout through positional sensitivity and relational structure. Specifically, we first introduce a Bidirectional Spatial Position-Guided Deformable Attention module that embeds explicit positional dependencies among layout elements directly into visual features. Second, we design a Graph Refinement Classifier (GRC) that refines predictions by modeling contextual relationships over a dynamically constructed layout graph. Extensive experiments show that PARL achieves state-of-the-art results: it sets a new state of the art among vision-only methods on DocLayNet and, notably, surpasses even strong multimodal models on M6Doc. Crucially, PARL is highly efficient, using roughly four times fewer parameters (65M) than large multimodal models (256M), demonstrating that sophisticated visual structure modeling can be both more efficient and more robust than multimodal fusion.
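To make the second component more concrete, below is a minimal sketch of what a graph-refinement step over a dynamically constructed layout graph might look like, assuming a PyTorch setting. The class name `GraphRefinementSketch`, the k-nearest-neighbor edge construction, the mean-pooled message passing, and all dimensions are illustrative assumptions, not the paper's actual GRC design.

```python
# Minimal, hypothetical sketch of the GRC idea summarized above -- not the
# paper's implementation. It (1) builds a layout graph on the fly from the
# centers of detected boxes via k-nearest neighbors and (2) performs one
# round of message passing to refine per-element class logits.
import torch
import torch.nn as nn


class GraphRefinementSketch(nn.Module):
    """Refine element classifications by aggregating features of spatial neighbors."""

    def __init__(self, feat_dim: int, num_classes: int, k: int = 4):
        super().__init__()
        self.k = k                                   # neighbors per layout element
        self.msg = nn.Linear(feat_dim, feat_dim)     # message transform for neighbors
        self.cls = nn.Linear(feat_dim, num_classes)  # refined classification head

    def forward(self, feats: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # feats: (N, feat_dim) per-element features; boxes: (N, 4) as (x1, y1, x2, y2)
        centers = 0.5 * (boxes[:, :2] + boxes[:, 2:])  # (N, 2) box centers
        dist = torch.cdist(centers, centers)           # pairwise center distances
        dist.fill_diagonal_(float("inf"))              # exclude self-edges from the graph
        k = min(self.k, feats.size(0) - 1)
        idx = dist.topk(k, largest=False).indices      # (N, k) nearest-neighbor indices
        neigh = self.msg(feats)[idx].mean(dim=1)       # aggregate neighbor messages
        return self.cls(feats + neigh)                 # residual update, then classify


if __name__ == "__main__":
    feats = torch.randn(6, 32)   # placeholder features for six detected elements
    boxes = torch.rand(6, 4)     # toy (x1, y1, x2, y2) boxes; only centers are used
    logits = GraphRefinementSketch(feat_dim=32, num_classes=11)(feats, boxes)
    print(logits.shape)          # torch.Size([6, 11])
```

In a full pipeline, such a refinement stage would consume the features produced by the position-guided attention backbone; the random features above are placeholders. The k-nearest-neighbor rule is one simple way to make the graph "dynamic" per document, and the paper's actual edge construction may well differ.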