Recent advances in diffusion models enable the generation of high-quality, visually striking images from text. However, multi-turn image generation, which is in high demand in real-world scenarios, still faces challenges in maintaining semantic consistency between images and texts, as well as contextual consistency of the same subject across multiple interactive turns. To address these issues, we introduce TheaterGen, a training-free framework that integrates large language models (LLMs) and text-to-image (T2I) models to enable multi-turn image generation. Within this framework, an LLM acts as a "Screenwriter", engaging in multi-turn interaction to generate and manage a standardized prompt book that contains prompts and layout designs for each character in the target image. Based on the prompt book, TheaterGen generates a list of character images and extracts guidance information, akin to a "Rehearsal". Subsequently, by incorporating the prompt book and guidance information into the reverse denoising process of the T2I diffusion model, TheaterGen generates the final image, conducting the "Final Performance". Through effective management of prompt books and character images, TheaterGen significantly improves semantic and contextual consistency in synthesized images. Furthermore, we introduce a dedicated benchmark, CMIGBench (Consistent Multi-turn Image Generation Benchmark), comprising 8000 multi-turn instructions. Unlike previous multi-turn benchmarks, CMIGBench does not define characters in advance, and it includes both story generation and multi-turn editing tasks for comprehensive evaluation. Extensive experimental results show that TheaterGen significantly outperforms state-of-the-art methods, raising the performance bar of the cutting-edge Mini DALLE 3 model by 21% in average character-character similarity and 19% in average text-image similarity.