Restoring low-resolution text images presents a significant challenge, as it requires preserving both the fidelity and the stylistic realism of the text in the restored images. Existing text image restoration methods often fall short in challenging scenarios: traditional super-resolution models cannot guarantee clarity, while diffusion-based methods fail to maintain fidelity. In this paper, we introduce a novel framework aimed at improving the generalization ability of diffusion models for text image super-resolution (SR), with a particular emphasis on fidelity. First, we propose a progressive data sampling strategy that incorporates diverse image types at different stages of training, stabilizing convergence and improving generalization. For the network architecture, we leverage a pre-trained SR prior to provide robust spatial reasoning capabilities, strengthening the model's ability to preserve textual information. Additionally, we employ a cross-attention mechanism to better integrate textual priors. To further reduce errors in these priors, we use confidence scores to dynamically adjust the importance of textual features during training. Extensive experiments on real-world datasets demonstrate that our approach not only produces text images with more realistic visual appearances but also improves the accuracy of text structure.
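To make the confidence-weighted prior integration concrete, below is a minimal PyTorch sketch of one plausible realization: image features attend to textual prior embeddings via cross-attention, with per-token recognizer confidence scores down-weighting unreliable text tokens. All names, shapes, and the specific weighting scheme are illustrative assumptions, since the abstract only states that cross-attention integrates textual priors and that confidence scores modulate their importance.

```python
# A minimal sketch, assuming textual priors come from a recognizer that
# also emits per-token confidence scores in [0, 1]. Not the paper's exact
# architecture; module and parameter names are hypothetical.
import torch
import torch.nn as nn


class ConfidenceWeightedCrossAttention(nn.Module):
    def __init__(self, img_dim: int, text_dim: int, num_heads: int = 8):
        super().__init__()
        # Project textual prior features into the image feature space so
        # they can serve as keys/values for the image queries.
        self.text_proj = nn.Linear(text_dim, img_dim)
        self.attn = nn.MultiheadAttention(img_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(img_dim)

    def forward(
        self,
        img_feats: torch.Tensor,   # (B, N_img, img_dim) flattened spatial features
        text_feats: torch.Tensor,  # (B, N_txt, text_dim) textual prior embeddings
        conf: torch.Tensor,        # (B, N_txt) recognizer confidence in [0, 1]
    ) -> torch.Tensor:
        kv = self.text_proj(text_feats)
        # Down-weight unreliable text tokens before attention so that
        # erroneous priors contribute less to the fused representation.
        kv = kv * conf.unsqueeze(-1)
        fused, _ = self.attn(query=img_feats, key=kv, value=kv)
        # Residual connection keeps image features intact when the
        # textual prior carries little information.
        return self.norm(img_feats + fused)


# Usage sketch with dummy tensors.
if __name__ == "__main__":
    block = ConfidenceWeightedCrossAttention(img_dim=256, text_dim=512)
    img = torch.randn(2, 64, 256)    # e.g. an 8x8 feature map, flattened
    txt = torch.randn(2, 16, 512)    # e.g. 16 recognized-character embeddings
    conf = torch.rand(2, 16)
    out = block(img, txt, conf)
    print(out.shape)  # torch.Size([2, 64, 256])
```

Scaling the keys and values by confidence is one simple way to let low-confidence tokens fade toward a no-op under the residual connection; a gating network or attention-bias formulation would serve the same stated goal.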