Compositionality is a critical capability in Text-to-Image (T2I) models, as it reflects their ability to understand and combine multiple concepts from text descriptions. Existing evaluations of compositional capability rely heavily on human-designed text prompts or fixed templates, which limits their diversity and complexity and yields low discriminative power. We propose ConceptMix, a scalable, controllable, and customizable benchmark that automatically evaluates the compositional generation ability of T2I models. It operates in two stages. First, ConceptMix generates the text prompts: using categories of visual concepts (e.g., objects, colors, shapes, spatial relationships), it randomly samples an object and a k-tuple of visual concepts, then uses GPT-4o to generate text prompts for image generation based on these sampled concepts. Second, ConceptMix evaluates the images generated in response to these prompts: it checks how many of the k concepts actually appear in each image by generating one question per visual concept and using a strong VLM to answer them. By administering ConceptMix to a diverse set of T2I models (proprietary as well as open) with increasing values of k, we show that ConceptMix has higher discriminative power than earlier benchmarks. Specifically, ConceptMix reveals that the performance of several models, especially open ones, drops dramatically as k increases. Importantly, it also provides insight into the lack of prompt diversity in widely used training datasets. Additionally, we conduct extensive human studies to validate the design of ConceptMix and to compare our automatic grading with human judgment. We hope ConceptMix will guide future T2I model development.
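The two-stage pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the concept pools, question templates, and function names below are hypothetical placeholders, and in the actual benchmark the prompt writing is delegated to GPT-4o and the grading questions are answered by a strong VLM rather than hard-coded templates.

```python
import random

# Hypothetical concept pools; the real ConceptMix taxonomy is larger and richer.
CONCEPT_POOLS = {
    "object": ["dog", "teapot", "bicycle"],
    "color": ["red", "blue", "green"],
    "shape": ["round", "square"],
    "spatial relationship": ["to the left of a tree", "on top of a table"],
}

def sample_concepts(k, rng=random):
    """Stage 1 (sampling): pick one object plus a k-tuple of visual concepts
    drawn from k distinct non-object categories."""
    obj = rng.choice(CONCEPT_POOLS["object"])
    categories = rng.sample([c for c in CONCEPT_POOLS if c != "object"], k)
    concepts = [(cat, rng.choice(CONCEPT_POOLS[cat])) for cat in categories]
    return obj, concepts

def build_grading_questions(obj, concepts):
    """Stage 2 (grading): one yes/no question per sampled concept,
    to be answered by a VLM against the generated image."""
    questions = [f"Is there a {obj} in the image?"]
    for category, value in concepts:
        questions.append(f"Does the {category} '{value}' appear in the image?")
    return questions

def score(vlm_answers):
    """Fraction of grading questions the VLM answers 'yes' to."""
    return sum(vlm_answers) / len(vlm_answers)
```

In this sketch, increasing k simply draws more concept categories into the sampled tuple, so each prompt must satisfy more constraints at once and grading asks proportionally more questions, which is what drives the benchmark's difficulty scaling.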