Text-to-image diffusion models have demonstrated remarkable capability in generating realistic images from arbitrary text prompts. However, they often produce inconsistent results for compositional prompts such as "two dogs" or "a penguin on the right of a bowl". Understanding these inconsistencies is crucial for reliable image generation. In this paper, we highlight the significant role of initial noise in these inconsistencies, where certain noise patterns are more reliable for compositional prompts than others. Our analyses reveal that different initial random seeds tend to guide the model to place objects in distinct image areas, potentially adhering to specific patterns of camera angles and image composition associated with the seed. To improve the model's compositional ability, we propose a method for mining these reliable cases, resulting in a curated training set of generated images without requiring any manual annotation. By fine-tuning text-to-image models on these generated images, we significantly enhance their compositional capabilities. For numerical composition, we observe relative increases of 29.3% and 19.5% for Stable Diffusion and PixArt-α, respectively. Spatial composition sees even larger gains, with 60.7% for Stable Diffusion and 21.1% for PixArt-α.