Text-to-Image (T2I) diffusion models have achieved remarkable success in image generation. Despite this progress, challenges remain in prompt-following ability and image quality, as well as in the scarcity of the high-quality datasets essential for refining these models. Because acquiring labeled data is costly, we introduce AGFSync, a framework that enhances T2I diffusion models through Direct Preference Optimization (DPO) in a fully AI-driven fashion. AGFSync uses Vision-Language Models (VLMs) to assess image quality across style, coherence, and aesthetics, generating feedback data within an AI-driven loop. Applying AGFSync to leading T2I models such as SD v1.4, SD v1.5, and SDXL-base, our extensive experiments on the TIFA dataset demonstrate notable improvements in VQA scores, aesthetic evaluations, and performance on the HPSv2 benchmark, consistently outperforming the base models. AGFSync's approach to refining T2I diffusion models paves the way for scalable alignment techniques. Our code and dataset are publicly available at https://anjingkun.github.io/AGFSync.
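To make the pipeline concrete, the following is a minimal sketch (not the authors' released code) of how VLM feedback might be turned into DPO preference pairs: several candidates are sampled per prompt, each is scored by a VLM across style, coherence, and aesthetics, and the best/worst pair is kept for preference optimization. The functions `generate_images` and `vlm_score` are hypothetical placeholders for a T2I pipeline and a VLM judge, and the equal weighting of the three aspect scores is an assumption, not the paper's exact recipe.

```python
# Sketch of AGFSync-style preference-pair construction for DPO.
# `generate_images` and `vlm_score` are hypothetical stand-ins for the
# actual T2I sampler and VLM evaluator used in the paper.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

Image = Any  # stand-in for whatever image object the pipeline returns


@dataclass
class PreferencePair:
    prompt: str
    chosen: Image    # higher-scoring candidate
    rejected: Image  # lower-scoring candidate


def build_dpo_pairs(
    prompts: List[str],
    generate_images: Callable[[str, int], List[Image]],   # hypothetical T2I call
    vlm_score: Callable[[str, Image], Dict[str, float]],  # hypothetical VLM judge
    candidates_per_prompt: int = 4,
) -> List[PreferencePair]:
    """For each prompt, sample candidates, score them with a VLM across
    style / coherence / aesthetics, and keep the best/worst as a DPO pair."""
    pairs: List[PreferencePair] = []
    for prompt in prompts:
        images = generate_images(prompt, candidates_per_prompt)
        scored = []
        for img in images:
            # e.g. {"style": 0.8, "coherence": 0.7, "aesthetics": 0.9};
            # averaging the aspects into one scalar is our assumption.
            aspects = vlm_score(prompt, img)
            scored.append((sum(aspects.values()) / len(aspects), img))
        scored.sort(key=lambda t: t[0], reverse=True)
        pairs.append(
            PreferencePair(prompt, chosen=scored[0][1], rejected=scored[-1][1])
        )
    return pairs
```

The resulting `(prompt, chosen, rejected)` triples are exactly the format standard DPO trainers consume, which is what allows the feedback loop to run without any human labeling.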