With the rapid advancement of large multimodal models (LMMs), recent text-to-image (T2I) models can generate high-quality images and demonstrate strong alignment with short prompts. However, they still struggle to understand and follow long, detailed prompts, often producing inconsistent generations. To address this challenge, we introduce LPG-Bench, a comprehensive benchmark for evaluating long-prompt-based text-to-image generation. LPG-Bench features 200 meticulously crafted prompts with an average length of over 250 words, approaching the input capacity of several leading commercial models. Using these prompts, we generate 2,600 images with 13 state-of-the-art models and collect comprehensive human-ranked annotations. Based on LPG-Bench, we observe that state-of-the-art T2I alignment metrics correlate poorly with human preferences on long-prompt-based image generation. To bridge this gap, we introduce TIT, a novel zero-shot metric based on text-to-image-to-text consistency for evaluating images generated from long prompts. The core idea of TIT is to quantify T2I alignment by directly comparing the raw prompt with an LMM-produced description of the generated image; we provide an efficient score-based instantiation, TIT-Score, and a large-language-model (LLM)-based instantiation, TIT-Score-LLM. Extensive experiments show that our framework aligns with human judgment better than CLIP-score, LMM-score, and other baselines, with TIT-Score-LLM attaining a 7.31% absolute improvement in pairwise accuracy over the strongest baseline. Together, LPG-Bench and the TIT metrics offer a deeper perspective for benchmarking and fostering the development of T2I models. All resources will be made publicly available.
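The text-to-image-to-text loop behind TIT can be sketched as follows: an LMM first captions the generated image, and the caption is then scored against the raw prompt. A minimal, hypothetical illustration is below; the token-overlap similarity used here is only a stand-in for the paper's actual scoring backbone (TIT-Score / TIT-Score-LLM), and `caption` would in practice come from an LMM describing the generated image.

```python
def tit_score_sketch(prompt: str, caption: str) -> float:
    """Toy text-to-image-to-text consistency score (illustrative only).

    `caption` stands in for an LMM-produced description of the image
    generated from `prompt`. Real TIT-Score would use a much stronger
    text-similarity model; Jaccard overlap of lowercased tokens is used
    here purely to show the comparison step of the pipeline.
    """
    prompt_tokens = set(prompt.lower().split())
    caption_tokens = set(caption.lower().split())
    union = prompt_tokens | caption_tokens
    if not union:
        return 0.0
    return len(prompt_tokens & caption_tokens) / len(union)


# A faithful caption should score higher than an unrelated one.
prompt = "a red cat sleeping on a sofa"
faithful = tit_score_sketch(prompt, "a red cat is sleeping on a sofa")
unfaithful = tit_score_sketch(prompt, "a blue dog running in a park")
```

A higher score indicates that the description recovered from the image preserves more of the long prompt's content, which is exactly the consistency signal TIT quantifies.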