Evaluating text-to-image synthesis is challenging due to the misalignment between established metrics and human preferences. We propose cFreD, a metric based on the notion of Conditional Fr\'echet Distance that explicitly accounts for both visual fidelity and text-prompt alignment. Existing metrics such as Inception Score (IS), Fr\'echet Inception Distance (FID), and CLIPScore assess either image quality or image-text alignment but not both, which limits their correlation with human preferences. Scoring models explicitly trained to replicate human preferences require constant updates and may not generalize to novel generation techniques or out-of-domain inputs. Through extensive experiments across multiple recently proposed text-to-image models and diverse prompt datasets, we demonstrate that cFreD exhibits higher correlation with human judgments than existing metrics, including those trained on human preferences. Our findings validate cFreD as a robust, future-proof metric for the systematic evaluation of text-to-image models, standardizing benchmarking in this rapidly evolving field. We release our evaluation toolkit and benchmark in the appendix.
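To make the underlying notion concrete, the (unconditional) Fr\'echet distance between two Gaussians fitted to feature embeddings is the quantity FID computes; cFreD builds on this by conditioning on the text prompt (the exact conditional construction is defined in the paper). Below is a minimal sketch of the classic closed-form Fr\'echet distance between two Gaussians; the function name and interface are illustrative, not the released toolkit's API.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Squared Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    # Numerical error can introduce a tiny imaginary component; discard it.
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

In FID, `mu` and `sigma` are the mean and covariance of Inception features for real vs. generated images; a conditional variant instead compares image-feature distributions conditioned on the prompt embedding, so that prompt-image mismatch also increases the distance.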