Text-to-Image (T2I) diffusion models have achieved remarkable success in image generation. Despite this progress, challenges remain in prompt-following ability, image quality, and the scarcity of high-quality datasets, which are essential for refining these models. Because acquiring labeled data is costly, we introduce AGFSync, a framework that enhances T2I diffusion models through Direct Preference Optimization (DPO) in a fully AI-driven approach. AGFSync uses Vision-Language Models (VLMs) to assess image quality across style, coherence, and aesthetics, generating feedback data within an AI-driven loop. Applying AGFSync to leading T2I models such as SD v1.4, SD v1.5, and SDXL, our extensive experiments on the TIFA dataset demonstrate notable improvements in VQA scores, aesthetic evaluations, and performance on the HPSv2 benchmark, consistently outperforming the base models. AGFSync's method of refining T2I diffusion models paves the way for scalable alignment techniques.