Text-to-image (T2I) generation has seen significant progress with diffusion models, enabling the generation of photo-realistic images from text prompts. Despite this progress, existing methods still struggle to follow complex text prompts, especially those requiring compositional and multi-step reasoning. Given such complex instructions, SOTA models often make mistakes in faithfully modeling object attributes and the relationships among them. In this work, we present an alternative paradigm for T2I synthesis, decomposing the task of complex multi-step generation into three steps: (a) Generate: we first generate an image using an existing diffusion model; (b) Plan: we use Multi-Modal LLMs (MLLMs) to identify mistakes in the generated image, expressed in terms of individual objects and their properties, and to produce the sequence of corrective steps required in the form of an edit plan; (c) Edit: we use an existing text-guided image editing model to sequentially execute the edit plan over the generated image, yielding an image that is faithful to the original instruction. Our approach derives its strength from being modular and training-free, and it can be applied over any combination of image generation and editing models. As an added contribution, we also develop a model capable of compositional editing, which further improves the overall accuracy of our proposed approach. Our method flexibly trades inference-time compute for performance on compositional text prompts. We perform extensive experimental evaluation across 3 benchmarks and 10 T2I models, including DALLE-3 and the latest SD-3.5-Large. Our approach not only improves the performance of SOTA models by up to 3 points, but also reduces the performance gap between weaker and stronger models. $\href{https://dair-iitd.github.io/GraPE/}{https://dair-iitd.github.io/GraPE/}$
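The Generate-Plan-Edit decomposition described above can be sketched as a simple control loop. This is a minimal illustrative sketch, not the paper's implementation: the functions `generate`, `plan`, and `edit` are hypothetical stand-ins for a diffusion T2I model, an MLLM planner, and a text-guided image editor, and the dictionary "image" is a toy representation used only to make the loop runnable.

```python
def generate(prompt: str) -> dict:
    """Stand-in for a diffusion T2I model (e.g. a Stable Diffusion variant).
    Here an "image" is just a toy record of the objects it depicts."""
    return {"prompt": prompt, "objects": []}

def plan(prompt: str, image: dict) -> list[str]:
    """Stand-in for an MLLM that compares the image against the prompt and
    returns a sequence of corrective edit instructions (the edit plan)."""
    wanted = set(prompt.split())
    present = set(image["objects"])
    return [f"add {obj}" for obj in sorted(wanted - present)]

def edit(image: dict, instruction: str) -> dict:
    """Stand-in for a text-guided image editor executing one edit step."""
    _, obj = instruction.split(" ", 1)
    image["objects"].append(obj)
    return image

def generate_plan_edit(prompt: str) -> dict:
    image = generate(prompt)            # (a) Generate an initial image
    for step in plan(prompt, image):    # (b) Plan corrective edits via MLLM
        image = edit(image, step)       # (c) Edit: execute the plan sequentially
    return image

result = generate_plan_edit("dog ball")
```

Because each stage is a black box behind a simple interface, any combination of generation and editing models can be plugged in without retraining, which is the modularity the abstract emphasizes.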