Text-to-Image (T2I) models are capable of generating high-quality artistic creations and visual content. However, existing research and evaluation standards predominantly focus on image realism and shallow text-image alignment, lacking a comprehensive assessment of complex semantic understanding and world knowledge integration in text-to-image generation. To address this challenge, we propose $\textbf{WISE}$, the first benchmark specifically designed for $\textbf{W}$orld Knowledge-$\textbf{I}$nformed $\textbf{S}$emantic $\textbf{E}$valuation. WISE moves beyond simple word-pixel mapping by challenging models with 1,000 meticulously crafted prompts across 25 sub-domains spanning cultural common sense, spatio-temporal reasoning, and natural science. To overcome the limitations of the traditional CLIP metric, we introduce $\textbf{WiScore}$, a novel quantitative metric for assessing knowledge-image alignment. Through comprehensive testing of 20 models (10 dedicated T2I models and 10 unified multimodal models) on these structured prompts, our findings reveal significant limitations in their ability to effectively integrate and apply world knowledge during image generation, highlighting critical pathways for enhancing knowledge incorporation and application in next-generation T2I models. Code and data are available at https://github.com/PKU-YuanGroup/WISE.