In this paper, we aim to enhance the performance of SwiftBrush, a prominent one-step text-to-image diffusion model, to be competitive with its multi-step Stable Diffusion counterpart. We first explore the quality-diversity trade-off between SwiftBrush and SD Turbo: the former excels in image diversity, while the latter excels in image quality. This observation motivates our proposed modifications to the training methodology, including better weight initialization and efficient LoRA training. Moreover, we introduce a novel clamped CLIP loss that enhances image-text alignment and yields improved image quality. Remarkably, by combining the weights of the models trained with efficient LoRA and with full training, we obtain a new state-of-the-art one-step diffusion model that achieves an FID of 8.14 and surpasses all GAN-based and multi-step Stable Diffusion models. The evaluation code is available at: https://github.com/vinairesearch/swiftbrushv2.
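The weight combination mentioned above can be read as an interpolation of two sets of model parameters. The abstract does not specify the merging scheme, so the following is only a minimal sketch of generic linear weight interpolation between two state dicts; the function name `merge_weights` and the merge ratio `alpha` are illustrative assumptions, not the paper's actual procedure.

```python
def merge_weights(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate two parameter dicts with the same keys.

    sd_a, sd_b: mappings from parameter name to value (scalars here for
    simplicity; in practice these would be tensors of matching shapes).
    alpha: hypothetical merge ratio; alpha=1.0 returns sd_a unchanged.
    """
    return {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a}

# Toy example with scalar "weights" standing in for tensors.
full_train = {"w": 1.0, "b": 0.0}   # model trained with full training
lora_train = {"w": 3.0, "b": 2.0}   # model trained with efficient LoRA
merged = merge_weights(full_train, lora_train, alpha=0.5)
# merged["w"] == 2.0, merged["b"] == 1.0
```

In practice such a merge would iterate over matching tensors in the two checkpoints; the equal-weight average here is just one point on the interpolation curve.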