Rectified Flow (RF) models trained within a flow matching framework have achieved state-of-the-art performance on text-to-image (T2I) conditional generation. Yet multiple benchmarks show that synthetic images can still suffer from poor alignment with the prompt, e.g., wrong attribute binding, subject positioning, or numeracy. While the literature offers many methods to improve T2I alignment, they consider only diffusion models and require auxiliary datasets, scoring models, or linguistic analysis of the prompt. In this paper we address these gaps. First, we introduce RFMI, a novel mutual information (MI) estimator for RF models that uses the pre-trained model itself for MI estimation. We then investigate a self-supervised fine-tuning approach for T2I alignment based on RFMI that requires no auxiliary information beyond the pre-trained model itself. Specifically, a fine-tuning set is constructed by selecting synthetic images, generated by the pre-trained RF model, that have high point-wise MI between image and prompt. Our experiments on MI estimation benchmarks demonstrate the validity of RFMI, and fine-tuning SD3.5-Medium confirms that RFMI improves T2I alignment while maintaining image quality.
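As a minimal sketch of the data-selection step the abstract describes, the snippet below filters generated (prompt, image) pairs by their point-wise MI scores. It assumes the scores have already been produced by an RFMI estimator; the function name, the score list, and the `keep_fraction` parameter are illustrative, not part of the paper's API.

```python
from typing import List, Tuple


def select_high_mi_samples(
    samples: List[Tuple[str, str]],
    mi_scores: List[float],
    keep_fraction: float = 0.25,
) -> List[Tuple[str, str]]:
    """Keep the fraction of (prompt, image) pairs with the highest
    point-wise MI, forming the self-supervised fine-tuning set.

    `mi_scores[i]` is assumed to be a point-wise MI estimate for
    `samples[i]`, e.g., as computed by an RFMI-style estimator.
    """
    if len(samples) != len(mi_scores):
        raise ValueError("samples and mi_scores must have equal length")
    # Number of pairs to retain (at least one).
    k = max(1, int(len(samples) * keep_fraction))
    # Rank pairs by MI score, highest first, and keep the top k.
    ranked = sorted(zip(mi_scores, samples), key=lambda p: p[0], reverse=True)
    return [sample for _, sample in ranked[:k]]


# Toy usage with made-up scores: keep the top 50% of 4 pairs.
pairs = [("a red cube", "img0.png"), ("two dogs", "img1.png"),
         ("a blue ball", "img2.png"), ("three cats", "img3.png")]
scores = [0.9, 0.1, 0.7, 0.3]
selected = select_high_mi_samples(pairs, scores, keep_fraction=0.5)
```

Ranking by point-wise MI rather than thresholding on an absolute value keeps the selection scale-free, which matters because MI estimates are only comparable within one estimator and prompt distribution.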