While text-to-visual models now produce photo-realistic images and videos, they struggle with compositional text prompts involving attributes, relationships, and higher-order reasoning such as logic and comparison. In this work, we conduct an extensive human study on GenAI-Bench to evaluate the performance of leading image and video generation models across various aspects of compositional text-to-visual generation. We also compare automated evaluation metrics against our collected human ratings and find that VQAScore -- a metric measuring the likelihood that a VQA model views an image as accurately depicting the prompt -- significantly outperforms previous metrics such as CLIPScore. In addition, VQAScore can improve generation in a black-box manner (without finetuning) by simply ranking a few (3 to 9) candidate images. Ranking by VQAScore is 2x to 3x more effective than other scoring methods such as PickScore, HPSv2, and ImageReward at improving human alignment ratings for DALL-E 3 and Stable Diffusion, especially on compositional prompts that require advanced visio-linguistic reasoning. We will release a new GenAI-Rank benchmark with over 40,000 human ratings to evaluate scoring metrics on ranking images generated from the same prompt. Lastly, we discuss promising areas for improvement in VQAScore, such as addressing fine-grained visual details. We will release all human ratings (over 80,000) to facilitate scientific benchmarking of both generative models and automated metrics.
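The black-box ranking step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `vqa_yes_probability` is a hypothetical stand-in for a real VQA model, which in VQAScore would return the probability of answering "Yes" to a question like "Does this figure show '{prompt}'?"; here it is mocked with naive keyword overlap so the example is self-contained.

```python
def vqa_yes_probability(image, prompt):
    """Placeholder for a real VQA model (assumption, not the paper's code).

    A real VQAScore implementation would compute P("Yes" | image,
    "Does this figure show '{prompt}'?") from a VQA model's output
    distribution. Here we mock it with word overlap between a candidate's
    caption and the prompt, purely so the ranking logic can run standalone.
    """
    caption_words = set(image["caption"].lower().split())
    prompt_words = set(prompt.lower().split())
    return len(caption_words & prompt_words) / max(len(prompt_words), 1)


def rank_by_vqascore(candidates, prompt, top_k=1):
    """Rank a few (e.g. 3 to 9) candidate images by score, highest first."""
    scored = sorted(
        candidates,
        key=lambda img: vqa_yes_probability(img, prompt),
        reverse=True,
    )
    return scored[:top_k]


# Three hypothetical candidates generated from the same prompt.
candidates = [
    {"id": 0, "caption": "a dog on a beach"},
    {"id": 1, "caption": "a red cube on top of a blue sphere"},
    {"id": 2, "caption": "a blue cube next to a sphere"},
]
best = rank_by_vqascore(candidates, "a red cube on top of a blue sphere")
print(best[0]["id"])  # -> 1 (the candidate best matching the prompt)
```

Because the scorer only reads the generated images (no gradients or finetuning of the generator), this best-of-N selection works with closed models such as DALL-E 3, which is what makes the approach "black-box".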