Text-to-image (T2I) models have made substantial progress in generating images from textual prompts. However, they frequently fail to produce images consistent with physical commonsense, a vital capability for applications in world simulation and everyday tasks. Current T2I evaluation benchmarks focus on metrics such as accuracy, bias, and safety, neglecting models' internal knowledge, particularly physical commonsense. To address this gap, we introduce PhyBench, a comprehensive T2I evaluation dataset comprising 700 prompts across 4 primary categories (mechanics, optics, thermodynamics, and material properties) and 31 distinct physical scenarios. We assess 6 prominent T2I models, including the proprietary models DALLE3 and Gemini, and demonstrate that incorporating physical principles into prompts enhances the models' ability to generate physically accurate images. Our findings reveal that: (1) even advanced models frequently err in various physical scenarios, except in optics; (2) GPT-4o, given item-specific scoring instructions, effectively evaluates the models' understanding of physical commonsense and aligns closely with human assessments; and (3) current T2I models primarily perform text-to-image translation and lack deep reasoning about physical commonsense. We advocate for increased attention to the inherent knowledge within T2I models, beyond their utility as mere image generation tools. The code and data are available at https://github.com/OpenGVLab/PhyBench.
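To make the GPT-4o judging setup concrete, here is a minimal sketch of an image-scoring call using the OpenAI Python SDK. The 0-3 scale, the rubric wording, and the `score_image` helper are illustrative assumptions, not the authors' released evaluation code (which lives in the linked repo); only the general idea, pairing each prompt with an item-specific scoring instruction, comes from the abstract.

```python
# Minimal sketch (NOT the PhyBench release code) of GPT-4o-as-judge scoring.
# Assumptions: a 0-3 score scale and one item-specific rubric per prompt.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def score_image(image_path: str, prompt: str, scoring_instruction: str) -> str:
    """Ask GPT-4o how well a generated image obeys the physical
    commonsense implied by `prompt`, using an item-specific rubric."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    f"Generation prompt: {prompt}\n"
                    f"Scoring instruction: {scoring_instruction}\n"
                    "Rate the image from 0 (violates the physics) to 3 "
                    "(fully consistent). Reply with the score and a "
                    "one-line rationale."
                )},
                {"type": "image_url", "image_url": {
                    "url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


# Hypothetical item from the mechanics category:
# print(score_image(
#     "ball.png",
#     "a ball released on a steep, smooth slope",
#     "Check that the ball is depicted moving down the slope, "
#     "not resting in place."))
```

Item-specific instructions matter here because a generic "is this image realistic?" question lets the judge overlook the one physical detail the benchmark item is actually probing.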