Despite the impressive text-to-image (T2I) synthesis capabilities of diffusion models, they often struggle to understand compositional relationships between objects and attributes, especially in complex settings. Existing solutions tackle these challenges by optimizing the cross-attention mechanism or by learning from caption pairs with minimal semantic changes. However, can we generate high-quality, complex contrastive images that diffusion models can directly discriminate based on visual representations? In this work, we leverage large language models (LLMs) to compose realistic, complex scenarios and harness Visual Question Answering (VQA) systems alongside diffusion models to automatically curate a contrastive dataset, ConPair, consisting of 15k pairs of high-quality contrastive images. These pairs feature minimal visual discrepancies and cover a wide range of attribute categories, especially complex and natural scenarios. To learn effectively from these error cases, i.e., hard negative images, we propose EvoGen, a new multi-stage curriculum for contrastive learning of diffusion models. Through extensive experiments across a wide range of compositional scenarios, we demonstrate the effectiveness of the proposed framework on compositional T2I benchmarks.
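The curation pipeline described above (LLM-composed caption pairs, diffusion rendering, VQA filtering) can be viewed as a rejection-sampling loop. The sketch below is a minimal illustration under our own assumptions, not the paper's actual code: `compose_captions`, `render`, and `faithfulness` are hypothetical stand-ins for the LLM, the T2I model, and the VQA scorer, and the acceptance threshold is an invented placeholder.

```python
from typing import Callable, List, Tuple

Caption = str
Image = object  # placeholder for a rendered image (e.g., a PIL.Image)


def curate_contrastive_pairs(
    compose_captions: Callable[[], Tuple[Caption, Caption]],  # hypothetical LLM: (caption, minimally edited caption)
    render: Callable[[Caption], Image],                       # hypothetical T2I model: caption -> image
    faithfulness: Callable[[Image, Caption], float],          # hypothetical VQA scorer: image-caption agreement in [0, 1]
    n_pairs: int = 15_000,
    threshold: float = 0.9,  # assumed acceptance threshold, not from the paper
) -> List[Tuple[Caption, Image, Caption, Image]]:
    """Collect contrastive pairs where each image faithfully depicts its
    own caption, so the negative image is a hard negative for the
    positive caption (minimal visual discrepancy, different semantics)."""
    dataset: List[Tuple[Caption, Image, Caption, Image]] = []
    while len(dataset) < n_pairs:
        pos_cap, neg_cap = compose_captions()
        pos_img, neg_img = render(pos_cap), render(neg_cap)
        # Accept the pair only if both images pass the VQA check;
        # otherwise resample, discarding unfaithful generations.
        if (faithfulness(pos_img, pos_cap) >= threshold
                and faithfulness(neg_img, neg_cap) >= threshold):
            dataset.append((pos_cap, pos_img, neg_cap, neg_img))
    return dataset
```

Under this reading, the VQA model acts as an automatic quality gate: only image pairs in which each image matches its own caption survive, which is what makes the retained negatives genuinely hard for contrastive training.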