Generative modeling can be formulated as learning a mapping f whose pushforward distribution matches the data distribution. This pushforward can be carried out iteratively at inference time, as in diffusion and flow-based models. In this paper, we propose a new paradigm, called Drifting Models, which evolves the pushforward distribution during training and naturally admits one-step inference. We introduce a drifting field that governs how samples move and reaches equilibrium when the two distributions match. This yields a training objective that lets the neural network optimizer evolve the pushforward distribution. In experiments, our one-step generator achieves state-of-the-art results on ImageNet at 256×256 resolution, with an FID of 1.54 in latent space and 1.61 in pixel space. We hope that our work opens up new opportunities for high-quality one-step generation.
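To make the pushforward formulation concrete, here is a toy illustration (not the paper's method, and with no drifting field): an affine mapping f pushes a standard Gaussian base distribution forward to a target Gaussian, so the pushforward f#N(0,1) matches the "data" distribution in one step. All names and parameter values below are hypothetical.

```python
import numpy as np

# Toy pushforward example: f maps base noise z ~ N(0, 1) to samples
# whose distribution (the pushforward of N(0, 1) under f) is the
# target N(3, 0.25). Because f is affine, this holds in closed form.
rng = np.random.default_rng(0)

target_mean, target_std = 3.0, 0.5  # hypothetical target distribution

def f(z):
    # One-step "generator": a single forward pass, no iteration.
    return target_mean + target_std * z

z = rng.standard_normal(100_000)  # samples from the base distribution
x = f(z)                          # samples from the pushforward

# The empirical moments of x approach those of the target.
print(round(float(x.mean()), 2), round(float(x.std()), 2))
```

A learned one-step generator plays the role of f for a high-dimensional data distribution, where no closed-form mapping exists and f must instead be trained so that its pushforward matches the data.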