Recent advances in text-to-image (T2I) generation have shown remarkable success in producing high-quality images from text. However, existing T2I models exhibit degraded performance on compositional image generation involving multiple objects and intricate relationships. We attribute this problem to limitations of existing image-text pair datasets, which rely on prompts alone and lack precise inter-object relationship annotations. To address this, we construct LAION-SG, a large-scale dataset with high-quality structural annotations of scene graphs (SG), which precisely describe the attributes and relationships of multiple objects and effectively capture the semantic structure of complex scenes. Based on LAION-SG, we train a new foundation model, SDXL-SG, that incorporates structural annotation information into the generation process. Extensive experiments show that models trained on LAION-SG achieve significant performance improvements in complex scene generation over models trained on existing datasets. We also introduce CompSG-Bench, a benchmark that evaluates models on compositional image generation, establishing a new standard for this domain.
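To make the idea of a structural scene-graph annotation concrete, the following is a minimal sketch of what one annotated entry might look like. The schema (field names `caption`, `objects`, `relations`, and the triple format) is an illustrative assumption for exposition; the abstract does not specify LAION-SG's actual annotation format.

```python
# Hypothetical sketch of a scene-graph (SG) annotation for one image-text
# pair. Field names are illustrative assumptions, not LAION-SG's real schema.
sample_annotation = {
    "caption": "A brown dog sitting on a red couch next to a lamp",
    "objects": [
        {"id": 0, "label": "dog",   "attributes": ["brown"]},
        {"id": 1, "label": "couch", "attributes": ["red"]},
        {"id": 2, "label": "lamp",  "attributes": []},
    ],
    # Relations are (subject_id, predicate, object_id) triples, making
    # inter-object relationships explicit rather than implicit in a prompt.
    "relations": [
        (0, "sitting on", 1),
        (1, "next to", 2),
    ],
}

# Print each relation as a readable triple.
labels = {o["id"]: o["label"] for o in sample_annotation["objects"]}
for subj, pred, obj in sample_annotation["relations"]:
    print(f"{labels[subj]} --{pred}--> {labels[obj]}")
```

The point of such a structure is that relationships like "sitting on" are first-class annotations a model can condition on directly, rather than content it must infer from a free-form caption.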