Diffusion models have demonstrated exceptional capabilities in generating a broad spectrum of visual content, yet their proficiency in rendering text remains limited: they often produce inaccurate characters or words that fail to blend well with the underlying image. To address these shortcomings, we introduce a novel framework named ARTIST, which incorporates a dedicated textual diffusion model that focuses specifically on learning text structures. We first pretrain this textual model to capture the intricacies of text representation, then finetune a visual diffusion model so that it assimilates textual structure information from the pretrained textual model. This disentangled architecture design and training strategy significantly enhance the text rendering ability of diffusion models for text-rich image generation. Additionally, we leverage pretrained large language models to better interpret user intent, further improving generation quality. Empirical results on the MARIO-Eval benchmark underscore the effectiveness of the proposed method, showing improvements of up to 15% across multiple metrics.
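The two-stage strategy described above (pretrain a textual model, then freeze it while finetuning a visual model on its outputs) can be illustrated with a deliberately simplified sketch. This is not the ARTIST implementation: the 1-D linear "denoisers", the residual-passing scheme, and all names here are illustrative assumptions standing in for the paper's actual diffusion architectures.

```python
# Toy sketch (assumption: NOT the actual ARTIST code) of the disentangled
# two-stage training strategy: (1) pretrain a "textual" denoiser on
# text-structure samples, (2) freeze it and finetune a "visual" denoiser
# that consumes the frozen model's estimate.
import random

class TinyDenoiser:
    """Toy 1-D linear denoiser: predicts the injected noise as w * x + b."""
    def __init__(self):
        self.w, self.b = 0.0, 0.0
        self.frozen = False

    def predict(self, x):
        return self.w * x + self.b

    def sgd_step(self, x, target, lr=0.01):
        if self.frozen:                         # frozen models receive no updates
            return
        err = self.predict(x) - target          # prediction error
        self.w -= lr * err * x                  # gradient step on w
        self.b -= lr * err                      # gradient step on b

def pretrain_textual(model, steps=2000):
    # Stage 1: learn to predict the injected noise on "text" samples.
    rng = random.Random(0)
    for _ in range(steps):
        x0 = rng.uniform(-1, 1)                 # clean "text structure" sample
        noise = rng.gauss(0, 1)
        model.sgd_step(x0 + noise, noise)       # denoise the noised sample

def finetune_visual(visual, textual, steps=2000):
    # Stage 2: the textual model is frozen; the visual model learns to
    # predict the residual left after the textual model's estimate,
    # i.e. it assimilates structure information from the pretrained model.
    textual.frozen = True
    rng = random.Random(1)
    for _ in range(steps):
        x0 = rng.uniform(-1, 1)
        noise = rng.gauss(0, 1)
        xt = x0 + noise
        residual = noise - textual.predict(xt)
        visual.sgd_step(xt, residual)
```

The key property the sketch preserves is the disentanglement: after stage 2, the textual model's weights are untouched, so text-structure knowledge learned in pretraining cannot be degraded by the visual finetuning.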