Diffusion models achieve great success in generating diverse and high-fidelity images, yet their widespread application, especially in real-time scenarios, is hampered by their inherently slow generation, which stems from the necessity of multi-step network inference. While certain predictions benefit from the model's full computation at each sampling iteration, not every iteration requires the same amount of computation, which can lead to computational inefficiency. Unlike typical adaptive computation problems, which concern single-step prediction, the multi-step generation process of diffusion models must dynamically adjust its computational resource allocation based on an ongoing assessment of each step's importance to the final image, presenting a unique set of challenges. In this work, we propose AdaDiff, an adaptive framework that dynamically allocates computational resources at each sampling step to improve the generation efficiency of diffusion models. To assess how changes in computational effort affect image quality, we present a timestep-aware uncertainty estimation module (UEM). Integrated at each intermediate layer, the UEM evaluates predictive uncertainty, and this uncertainty measurement serves as the indicator for deciding whether to terminate the inference process early. Additionally, we introduce an uncertainty-aware layer-wise loss aimed at bridging the performance gap between full models and their adaptive counterparts.
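The two mechanisms described above can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's implementation: the linear-plus-sigmoid uncertainty head, the fixed exit threshold, the mean-pooled features, and the `(1 - u)`-weighted distillation loss are all assumptions made for the sketch.

```python
import numpy as np

def uem_uncertainty(features, weight, bias):
    # Hypothetical UEM head: a linear probe on pooled layer features,
    # squashed to (0, 1) by a sigmoid, read as predictive uncertainty.
    logit = features @ weight + bias
    return 1.0 / (1.0 + np.exp(-logit))

def adaptive_forward(x, layers, uem_heads, threshold=0.1):
    """Early-exit forward pass: run layers in sequence and stop as soon
    as the attached UEM head reports sufficiently low uncertainty."""
    h = x
    for depth, (layer, (w, b)) in enumerate(zip(layers, uem_heads), start=1):
        h = layer(h)
        u = uem_uncertainty(h.mean(axis=0), w, b)  # pool features to a vector
        if u < threshold:
            return h, depth  # confident enough: skip the remaining layers
    return h, len(layers)

def uncertainty_weighted_layer_loss(layer_preds, full_pred, uncertainties):
    # Sketch of an uncertainty-aware layer-wise loss: each intermediate
    # prediction is pulled toward the full model's output, weighted by
    # how confident (low-uncertainty) that exit claims to be.
    loss = 0.0
    for pred, u in zip(layer_preds, uncertainties):
        loss += (1.0 - u) * np.mean((pred - full_pred) ** 2)
    return loss
```

In this sketch a layer whose UEM head outputs low uncertainty terminates inference immediately, so later sampling steps (or easier inputs) can use fewer layers; the layer-wise loss trains every intermediate exit toward the full model's prediction so that early termination costs little quality.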