Over the past few years, Text-to-Image (T2I) generation approaches based on diffusion models have gained significant attention. However, vanilla diffusion models often suffer from spelling inaccuracies in the text displayed within the generated images. The capability to generate visual text is crucial, offering both academic interest and practical value across a wide range of applications. To produce accurate visual text images, state-of-the-art techniques adopt a glyph-controlled image generation approach, consisting of a text layout generator followed by an image generator conditioned on the generated text layout. Nevertheless, our study reveals that these models still face three primary challenges, prompting us to develop a testbed to facilitate future research. We introduce a benchmark, LenCom-Eval, specifically designed for testing models' capability in generating images with Lengthy and Complex visual text. We then introduce a training-free framework to enhance such two-stage generation approaches. We examine the effectiveness of our approach on both the LenCom-Eval and MARIO-Eval benchmarks and demonstrate notable improvements across a range of evaluation metrics, including CLIPScore, OCR precision, recall, F1 score, accuracy, and edit distance. For instance, our proposed framework improves the backbone model, TextDiffuser, by more than 23\% and 13.5\% in terms of OCR word F1 on LenCom-Eval and MARIO-Eval, respectively. Our work makes a unique contribution to the field by focusing on generating images with long and rare text sequences, a niche previously unexplored by existing literature.
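The abstract reports OCR word F1 and edit distance as evaluation metrics but does not spell out their definitions. The following is a minimal sketch of one plausible formulation, assuming word-level precision/recall are computed via multiset overlap between OCR-recognized words and ground-truth words, and edit distance is standard Levenshtein distance; the function names and exact matching rules are illustrative assumptions, not the paper's specified implementation.

```python
from collections import Counter

def ocr_word_f1(pred_words, gt_words):
    """Word-level precision, recall, and F1 via multiset overlap
    (assumed definition; actual benchmark matching rules may differ)."""
    overlap = sum((Counter(pred_words) & Counter(gt_words)).values())
    p = overlap / len(pred_words) if pred_words else 0.0
    r = overlap / len(gt_words) if gt_words else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def edit_distance(a, b):
    """Levenshtein distance between two strings, computed with a
    single rolling row of the dynamic-programming table."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # dp[j] holds the previous row, dp[j-1] the current row,
            # prev the previous row's diagonal entry
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]
```

For example, `ocr_word_f1(["open", "sale"], ["grand", "open", "sale"])` yields perfect precision but recall 2/3, and `edit_distance("kitten", "sitting")` returns 3.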