Detecting tampered text in document images is a challenging task due to data scarcity. To address this, previous work has attempted to generate tampered documents using rule-based methods. However, the resulting documents often suffer from limited variety and poor visual quality, typically exhibiting highly visible artifacts that are rarely observed in real-world manipulations. This undermines a model's ability to learn robust, generalizable features and results in poor performance on real-world data. Motivated by this discrepancy, we propose a novel method for generating high-quality tampered document images. We first train an auxiliary network to compare text crops, leveraging contrastive learning with a novel strategy for defining positive pairs and their corresponding negatives. We also train a second auxiliary network to evaluate whether a crop tightly encloses the intended characters, neither cutting off parts of characters nor including parts of adjacent ones. Using a carefully designed generation pipeline that leverages both networks, we introduce a framework capable of producing diverse, high-quality tampered document images. We assess the effectiveness of our data generation pipeline by training multiple models, under identical training protocols, on datasets derived from the same source images but generated with our method and with existing approaches. Evaluating these models on various open-source datasets shows that our pipeline yields consistent performance improvements across architectures and datasets.