Scene text synthesis involves rendering specified texts onto arbitrary images. Current methods typically formulate this task in an end-to-end manner but lack effective character-level guidance during training. Moreover, their text encoders, pre-trained on a single font type, struggle to adapt to the diverse font styles encountered in practical applications. Consequently, these methods suffer from character distortion, repetition, and absence, particularly in polystylistic scenarios. To address these issues, this paper proposes DreamText for high-fidelity scene text synthesis. Our key idea is to reconstruct the diffusion training process, introducing more refined guidance tailored to this task, to expose and rectify the model's attention at the character level and strengthen its learning of text regions. This reformulation poses a hybrid optimization challenge involving both discrete and continuous variables, which we tackle with a heuristic alternate optimization strategy. Meanwhile, we jointly train the text encoder and generator to comprehensively learn and utilize the diverse fonts present in the training dataset. This joint training is seamlessly integrated into the alternate optimization process, fostering a synergistic relationship between learning character embeddings and re-estimating character attention. Specifically, in each step, we first encode the potential positions of generated characters, estimated from cross-attention maps, into latent character masks. These masks are then utilized to update the representations of specific characters in the current step, which in turn enables the generator to correct each character's attention in subsequent steps. Both qualitative and quantitative results demonstrate the superiority of our method over the state of the art.
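The mask-estimation step described above can be sketched in a minimal, simplified form. The following is an illustrative approximation only, not the paper's actual procedure: the function name, the winner-take-all assignment, and the threshold `tau` are assumptions introduced for exposition. Given per-character cross-attention maps over the latent grid, each spatial location is assigned to the character that attends to it most strongly, with weak responses suppressed:

```python
import numpy as np

def latent_character_masks(attn_maps, tau=0.5):
    """Estimate a binary mask per character from cross-attention maps.

    attn_maps: array of shape (K, H, W), the attention of K character
    tokens over an H x W latent grid. A location is assigned to the
    character attending to it most strongly, provided the attention
    exceeds a fraction tau of that character's peak response.
    (Hypothetical sketch; the actual mask encoding may differ.)
    """
    K, H, W = attn_maps.shape
    winner = attn_maps.argmax(axis=0)                    # (H, W): strongest character per location
    peak = attn_maps.max(axis=(1, 2), keepdims=True)     # (K, 1, 1): per-character peak attention
    strong = attn_maps >= tau * np.maximum(peak, 1e-8)   # suppress weak responses
    masks = np.zeros_like(attn_maps, dtype=bool)
    for k in range(K):
        masks[k] = (winner == k) & strong[k]
    return masks

# Toy example: two characters, each attending to one half of a 4x4 grid.
attn = np.zeros((2, 4, 4))
attn[0, :, :2] = 1.0
attn[1, :, 2:] = 1.0
masks = latent_character_masks(attn)
```

In the alternating scheme, such masks would then pool latent features from each character's region to refresh that character's embedding, after which the generator is trained against the updated embeddings in the next step.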