Generative modeling can be formulated as learning a mapping f whose pushforward distribution matches the data distribution. The pushforward can be carried out iteratively at inference time, as in diffusion and flow-based models. In this paper, we propose a new paradigm, called Drifting Models, which evolves the pushforward distribution during training and naturally admits one-step inference. We introduce a drifting field that governs sample movement and reaches equilibrium when the distributions match. This leads to a training objective under which the neural network optimizer evolves the distribution. In experiments, our one-step generator achieves state-of-the-art results on ImageNet at 256×256 resolution, with an FID of 1.54 in latent space and 1.61 in pixel space. We hope that our work opens up new opportunities for high-quality one-step generation.