Diffusion models (DMs) as generative priors have recently shown great potential for denoising tasks but lack a theoretical understanding of their mean square error (MSE) optimality. This paper proposes a novel denoising strategy inspired by the structure of the MSE-optimal conditional mean estimator (CME). The resulting DM-based denoiser can be conveniently applied on top of a pre-trained DM and is particularly fast, as it truncates the reverse diffusion steps and requires no stochastic re-sampling. We present a comprehensive (non-)asymptotic optimality analysis of the proposed diffusion-based denoiser, demonstrating polynomial-time convergence to the CME under mild conditions. Our analysis also derives a novel Lipschitz constant that depends solely on the DM's hyperparameters. Further, we offer a new perspective on DMs, showing that they inherently combine an asymptotically optimal denoiser with a powerful generator, switchable between the two by turning re-sampling in the reverse process on or off. The theoretical findings are thoroughly validated with experiments on various benchmark datasets.