Recent advancements in text-to-image generative models have been remarkable. Yet the field lacks evaluation metrics that accurately reflect model performance, and in particular fine-grained metrics that can guide model optimization. In this paper, we propose EvalAlign, a metric characterized by its accuracy, stability, and fine granularity. Our approach leverages the capabilities of Multimodal Large Language Models (MLLMs) pre-trained on extensive datasets. We develop evaluation protocols that focus on two key dimensions: image faithfulness and text-image alignment. Each protocol comprises a set of detailed, fine-grained instructions linked to specific scoring options, enabling precise manual scoring of the generated images. We apply Supervised Fine-Tuning (SFT) to align the MLLM closely with human evaluative judgments, resulting in a robust evaluation model. Comprehensive tests across 24 text-to-image generation models demonstrate that EvalAlign not only offers superior metric stability but also aligns more closely with human preferences than existing metrics, confirming its effectiveness and utility in model assessment.