Evaluating the quality of synthesized images remains a significant challenge in the development of text-to-image (T2I) generation. Most existing studies in this area primarily focus on evaluating text-image alignment, image quality, and object composition, with comparatively few addressing the factuality of T2I models, particularly when the concepts involved are knowledge-intensive. To bridge this gap, we present T2I-FactualBench, the largest benchmark to date in terms of the number of concepts and prompts, specifically designed to evaluate the factuality of knowledge-intensive concept generation. T2I-FactualBench comprises a three-tiered knowledge-intensive text-to-image generation framework, ranging from the basic memorization of individual knowledge concepts to the more complex composition of multiple knowledge concepts. We further introduce a multi-round visual question answering (VQA)-based evaluation framework to assess factuality across the three tiers of knowledge-intensive text-to-image generation tasks. Experiments on T2I-FactualBench indicate that current state-of-the-art (SOTA) T2I models still leave significant room for improvement.