The practical performance of generative diffusion models depends on an appropriate choice of the noise scheduling function, which can equivalently be expressed as a time reparameterization. In this paper, we present a time scheduler that selects sampling points based on entropy rather than uniform time spacing, ensuring that each point contributes an equal amount of information to the final generation. We prove that this time reparameterization does not depend on the initial choice of the time variable. Furthermore, we provide a tractable exact formula for estimating this \emph{entropic time} for a trained model from its training loss, without substantial overhead. Inspired by the optimality results, we also introduce a rescaled entropic time. In our experiments with mixtures of Gaussian distributions and ImageNet, we show that using the (rescaled) entropic times greatly improves the inference performance of trained models. In particular, we find that the image quality of pretrained EDM2 models, as evaluated by FID and FD-DINO scores, can be substantially improved by the rescaled entropic time reparameterization without increasing the number of function evaluations (NFEs), with the greatest improvements in the low-NFE regime.
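To make the scheduling idea concrete, the following is a minimal numerical sketch of our own, not the paper's implementation: it assumes a per-time loss curve is available as a proxy for the entropy production rate, integrates it into a cumulative entropic time, and inverts that map so that sampling points are equally spaced in entropy rather than in the original time variable. The function name `entropic_schedule`, its arguments, and the exponential rate profile in the example are hypothetical placeholders.

```python
import numpy as np

def entropic_schedule(t_grid, rate_grid, n_steps):
    """Return n_steps sampling times equally spaced in cumulative entropy.

    t_grid    : increasing time grid on [0, 1], shape (N,)
    rate_grid : nonnegative entropy-rate proxy on t_grid (e.g., an
                estimated per-time training loss), shape (N,)
    """
    # Cumulative entropy via trapezoidal integration, normalized to [0, 1].
    cum = np.concatenate([[0.0], np.cumsum(
        0.5 * (rate_grid[1:] + rate_grid[:-1]) * np.diff(t_grid))])
    cum /= cum[-1]
    # Invert the cumulative map: choose times whose cumulative entropy
    # hits equally spaced levels, so each step carries equal information.
    levels = np.linspace(0.0, 1.0, n_steps)
    return np.interp(levels, cum, t_grid)

# Example: a rate concentrated at small t yields denser sampling there.
t = np.linspace(0.0, 1.0, 1001)
rate = np.exp(-5.0 * t)  # hypothetical entropy-rate proxy
print(entropic_schedule(t, rate, 8))
```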