The ability to understand visual concepts and to replicate and compose those concepts from images is a central goal of computer vision. Recent advances in text-to-image (T2I) models have led to high-definition, realistic image generation by learning from large databases of images and their descriptions. However, the evaluation of T2I models has focused on photorealism and limited qualitative measures of visual understanding. To quantify the ability of T2I models to learn and synthesize novel visual concepts (a.k.a. personalized T2I), we introduce ConceptBed, a large-scale dataset that consists of 284 unique visual concepts and 33K composite text prompts. Along with the dataset, we propose an evaluation metric, Concept Confidence Deviation (CCD), that uses the confidence of oracle concept classifiers to measure the alignment between concepts generated by T2I models and concepts contained in target images. We evaluate visual concepts that are objects, attributes, or styles, and we also evaluate four dimensions of compositionality: counting, attributes, relations, and actions. Our human study shows that CCD is highly correlated with human understanding of concepts. Our results point to a trade-off between learning the concepts and preserving the compositionality that existing approaches struggle to overcome. The data, code, and interactive demo are available at: https://conceptbed.github.io/
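The idea behind CCD can be sketched as follows: an oracle classifier, trained on real images of each concept, scores both real and generated images, and the metric measures how far the classifier's confidence on generated images deviates from its confidence on real ones. The function below is a minimal illustrative sketch under that reading; the paper's exact formulation (normalization, aggregation across concepts) may differ, and `real_conf`/`gen_conf` are hypothetical per-image confidence arrays.

```python
import numpy as np

def concept_confidence_deviation(real_conf, gen_conf):
    """Illustrative sketch of CCD (not the paper's exact formula).

    real_conf: oracle classifier confidences for real images of a concept.
    gen_conf:  oracle classifier confidences for generated images of the
               same concept.

    Returns the drop in mean confidence on generated images relative to
    real images; lower values indicate generations that the oracle finds
    as concept-faithful as real images.
    """
    real_conf = np.asarray(real_conf, dtype=float)
    gen_conf = np.asarray(gen_conf, dtype=float)
    return float(real_conf.mean() - gen_conf.mean())
```

For example, if the oracle assigns mean confidence 0.92 to real images of a concept but only 0.65 to generations, the deviation of 0.27 signals that the generator has not fully captured the concept.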