The practical performance of generative diffusion models depends on the appropriate choice of the noise scheduling function, which can equivalently be expressed as a time reparameterization. In this paper, we present a time scheduler that selects sampling points based on entropy rather than uniform time spacing, ensuring that each point contributes an equal amount of information to the final generation. We prove that this time reparameterization does not depend on the initial choice of time. Furthermore, we provide a tractable exact formula for estimating this \emph{entropic time} for a trained model from the training loss, without substantial overhead. Alongside the entropic time, and inspired by optimality results, we introduce a rescaled entropic time. In experiments with mixtures of Gaussian distributions and ImageNet, we show that using the (rescaled) entropic times greatly improves the inference performance of trained models. In particular, we find that the image quality of pretrained EDM2 models, as evaluated by FID and FD-DINO scores, is substantially improved by the rescaled entropic time reparameterization without increasing the number of function evaluations, with the largest gains in the few-NFE regime. Code is available at https://github.com/DejanStancevic/Entropic-Time-Schedulers-for-Generative-Diffusion-Models.
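As a rough illustration of the core idea, the sketch below constructs an equal-entropy sampling schedule from a tabulated loss curve. It assumes, purely for illustration, that the entropy production rate is proportional to the per-time training loss; the function name \texttt{entropic\_schedule} and the synthetic loss curve are hypothetical, and the paper's exact formula should be used in practice.

\begin{verbatim}
import numpy as np

def entropic_schedule(t_grid, loss_grid, n_steps):
    """Return n_steps sampling times equally spaced in cumulative entropy.

    Assumption (for illustration only): the entropy production rate is
    proportional to the per-time training loss on t_grid.
    """
    rate = np.asarray(loss_grid, dtype=float)
    # Cumulative entropy S(t) via trapezoidal integration over the grid.
    segments = 0.5 * (rate[1:] + rate[:-1]) * np.diff(t_grid)
    entropy = np.concatenate(([0.0], np.cumsum(segments)))
    entropy /= entropy[-1]  # normalize so total entropy is 1
    # Invert S: pick the times where cumulative entropy crosses
    # equally spaced targets, so each step adds the same information.
    targets = np.linspace(0.0, 1.0, n_steps)
    return np.interp(targets, entropy, t_grid)

# Example: a loss curve peaked at intermediate noise levels concentrates
# sampling points where most of the information is generated.
t = np.linspace(0.0, 1.0, 1000)
loss = np.exp(-((t - 0.4) / 0.2) ** 2)
print(entropic_schedule(t, loss, n_steps=8))
\end{verbatim}

With a uniform loss curve this reduces to uniform time spacing; a peaked loss curve instead clusters the sampling points around the peak, which is the intended behavior of the entropic scheduler.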