While text-to-visual models now produce photo-realistic images and videos, they struggle with compositional text prompts involving attributes, relationships, and higher-order reasoning such as logic and comparison. In this work, we conduct an extensive human study on GenAI-Bench to evaluate the performance of leading image and video generation models across various aspects of compositional text-to-visual generation. We also compare automated evaluation metrics against our collected human ratings and find that VQAScore -- a metric measuring the likelihood that a VQA model views an image as accurately depicting the prompt -- significantly outperforms previous metrics such as CLIPScore. In addition, VQAScore can improve generation in a black-box manner (without finetuning) simply by ranking a few (3 to 9) candidate images. Ranking by VQAScore is 2x to 3x more effective than other scoring methods like PickScore, HPSv2, and ImageReward at improving human alignment ratings for DALL-E 3 and Stable Diffusion, especially on compositional prompts that require advanced visio-linguistic reasoning. We release a new GenAI-Rank benchmark with over 40,000 human ratings to evaluate scoring metrics on ranking images generated from the same prompt. Lastly, we discuss promising areas for improvement in VQAScore, such as addressing fine-grained visual details. We will release all human ratings (over 80,000) to facilitate scientific benchmarking of both generative models and automated metrics.
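The black-box reranking described above can be sketched as a simple best-of-N selection: score each candidate image by the VQA model's probability of answering "Yes" that the image depicts the prompt, then keep the top-ranked one. The sketch below is illustrative only; `scorer` is a hypothetical stand-in for a real VQAScore implementation (which would query a VQA model), and the toy scores are made up for demonstration.

```python
from typing import Callable, Sequence


def rerank_by_vqascore(
    images: Sequence[str],
    prompt: str,
    scorer: Callable[[str, str], float],
) -> list[str]:
    """Sort candidate images best-first by the scorer's value.

    `scorer(image, prompt)` stands in for VQAScore: the probability that
    a VQA model judges the image to accurately depict the prompt.
    """
    return sorted(images, key=lambda img: scorer(img, prompt), reverse=True)


# Toy illustration: hypothetical scores replacing a real VQA model call.
toy_scores = {"img_a": 0.31, "img_b": 0.87, "img_c": 0.55}
ranked = rerank_by_vqascore(
    list(toy_scores),
    "a red cube to the left of a blue ball",
    lambda img, _prompt: toy_scores[img],
)
print(ranked[0])  # the candidate with the highest score, here "img_b"
```

Because the scorer is only queried, never differentiated through, this works with closed generators such as DALL-E 3: sample 3 to 9 candidates, score each, and return the argmax.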