Diffusion models are relatively easy to train but require many steps to generate samples. Consistency models are far more difficult to train, but generate samples in a single step. In this paper we propose Multistep Consistency Models: a unification of Consistency Models (Song et al., 2023) and TRACT (Berthelot et al., 2023) that can interpolate between a consistency model and a diffusion model, trading off sampling speed against sampling quality. Specifically, a 1-step consistency model is a conventional consistency model, whereas we show that an $\infty$-step consistency model is a diffusion model. Multistep Consistency Models work very well in practice. By increasing the sample budget from a single step to 2-8 steps, we can more easily train models that generate higher-quality samples, while retaining much of the sampling-speed benefit. Notable results are 1.4 FID on ImageNet64 in 8 steps and 2.1 FID on ImageNet128 in 8 steps with consistency distillation. We also show that our method scales to a text-to-image diffusion model, generating samples very close in quality to those of the original model.