Latent diffusion models (LDMs) have revolutionized text-to-image generation, spurring a proliferation of advanced models and diverse downstream applications. Despite this progress, current diffusion models still suffer from several limitations, including inferior visual quality, inadequate aesthetic appeal, and inefficient inference, with no comprehensive solution in sight. To address these challenges, we present UniFL, a unified framework that leverages feedback learning to enhance diffusion models comprehensively. UniFL is a universal, effective, and generalizable solution applicable to various diffusion models, such as SD1.5 and SDXL. It consists of three key components: perceptual feedback learning, which enhances visual quality; decoupled feedback learning, which improves aesthetic appeal; and adversarial feedback learning, which accelerates inference. In-depth experiments and extensive user studies validate the superior performance of our method in both generation quality and inference acceleration. For instance, UniFL surpasses ImageReward by 17% in user preference for generation quality, and with 4-step inference outperforms LCM and SDXL Turbo by 57% and 20% in general preference, respectively.
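The common thread across the three components above is feedback learning: a frozen reward model scores the generator's output, and the generator is updated to increase that score. The toy sketch below illustrates only this core loop; the parameter vector, quadratic stand-in reward, and function names are hypothetical simplifications, not the UniFL implementation (which operates on decoded diffusion samples with learned perceptual and aesthetic reward models).

```python
import numpy as np

def reward(x, target):
    # Stand-in for a frozen reward model: higher score when the
    # "generated" vector x is closer to a preferred target.
    return -np.sum((x - target) ** 2)

def reward_grad(x, target):
    # Analytic gradient of the toy reward with respect to x.
    return -2.0 * (x - target)

def feedback_finetune(theta, target, lr=0.1, steps=100):
    # Gradient ascent on the reward signal: the essence of
    # reward-feedback fine-tuning, in miniature.
    for _ in range(steps):
        theta = theta + lr * reward_grad(theta, target)
    return theta

rng = np.random.default_rng(0)
target = rng.normal(size=8)   # what the reward model "prefers" (hypothetical)
theta0 = rng.normal(size=8)   # initial "generator" parameters (hypothetical)
theta = feedback_finetune(theta0, target)
print(reward(theta, target) > reward(theta0, target))  # → True
```

In the full method, `theta` would be the diffusion model's weights and `reward` a learned preference model, with gradients flowing through the denoising and decoding steps rather than computed in closed form.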