Document comparison typically relies on optical character recognition (OCR) as its core technology. However, OCR requires selecting an appropriate language model for each document, and the performance of multilingual or hybrid models remains limited. To overcome these challenges, we propose text change detection (TCD) using an image comparison model tailored for multilingual documents. Unlike OCR-based approaches, our method employs word-level image-to-image comparison to detect changes. Our model generates bidirectional change segmentation maps between the source and target documents. To enhance performance without requiring explicit text alignment or scaling preprocessing, we exploit correlations among multi-scale attention features. We also construct a benchmark dataset comprising actual printed and scanned word pairs in various languages to evaluate our model. We validate our approach on this benchmark and on the public Distorted Document Images and LRDE Document Binarization Dataset benchmarks. We compare our model against state-of-the-art semantic segmentation and change detection models, as well as conventional OCR-based models.