Diffusion models show promise for dynamic scene deblurring; however, existing studies often fail to leverage the intrinsic nature of the blurring process within diffusion models, limiting their full potential. To address this, we present the Blur Diffusion Model (BlurDM), which seamlessly integrates the blur formation process into diffusion for image deblurring. Observing that motion blur stems from continuous exposure, BlurDM implicitly models the blur formation process through a dual-diffusion forward scheme that diffuses both noise and blur onto a sharp image. For the reverse generation process, we derive a dual denoising and deblurring formulation, enabling BlurDM to recover the sharp image by simultaneously denoising and deblurring, taking pure Gaussian noise conditioned on the blurred image as input. Additionally, to integrate BlurDM into deblurring networks efficiently, we perform BlurDM in the latent space, forming a flexible prior generation network for deblurring. Extensive experiments demonstrate that BlurDM significantly and consistently enhances existing deblurring methods on four benchmark datasets. The project page is available at https://jin-ting-he.github.io/BlurDM/.
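To make the dual-diffusion forward scheme concrete, the sketch below mixes a blur residual into a sharp image alongside Gaussian noise as the timestep grows. The linear blur-mixing weight `delta_t = t / T`, the cumulative noise schedule `alpha_bar`, and all names are illustrative assumptions for exposition, not the paper's exact formulation:

```python
import numpy as np

def dual_diffusion_forward(sharp, blurred, t, T, alpha_bar):
    """Hypothetical dual-diffusion forward step: diffuse blur and noise
    onto a sharp image. `alpha_bar` is a cumulative noise schedule with
    alpha_bar[0] = 1 (no noise at t = 0)."""
    # Hypothetical schedule: the blur contribution grows linearly with t,
    # so the state drifts from the sharp image toward the blurred one.
    delta_t = t / T
    mean = np.sqrt(alpha_bar[t]) * (sharp + delta_t * (blurred - sharp))
    # Standard Gaussian-noise diffusion on top of the blur-mixed mean.
    noise = np.random.randn(*sharp.shape)
    x_t = mean + np.sqrt(1.0 - alpha_bar[t]) * noise
    return x_t, noise

# Usage sketch: at t = 0 the state is exactly the sharp image; at t = T
# it is a noisy version of the fully blur-mixed image.
T = 10
alpha_bar = np.linspace(1.0, 0.05, T + 1)
sharp = np.zeros((4, 4))
blurred = sharp + 0.5
x_0, _ = dual_diffusion_forward(sharp, blurred, 0, T, alpha_bar)
```

The reverse process would then undo both corruptions at once, stepping from pure noise conditioned on the blurred image back to the sharp one.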