Diffusion models have demonstrated remarkable success in image restoration tasks. However, their multi-step denoising process introduces significant computational overhead, limiting their practical deployment. Furthermore, existing methods struggle to effectively remove severe JPEG artifacts, especially from heavily compressed images. To address these challenges, we propose CODiff, a compression-aware one-step diffusion model for JPEG artifact removal. The core of CODiff is the compression-aware visual embedder (CaVE), which extracts and leverages JPEG compression priors to guide the diffusion model. We propose a dual learning strategy that combines explicit and implicit learning. Specifically, explicit learning enforces a quality prediction objective to distinguish low-quality images compressed at different levels, while implicit learning employs a reconstruction objective that enhances the model's generalization. This dual learning strategy yields a deeper and more comprehensive understanding of JPEG compression. Experimental results demonstrate that CODiff surpasses recent leading methods in both quantitative and visual quality metrics. The code and models will be released at https://github.com/jp-guo/CODiff.
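The dual learning strategy above can be sketched as a combined training signal: an explicit cross-entropy loss for predicting the JPEG compression level, plus an implicit reconstruction loss on the embedding. This is a minimal illustrative sketch, not the paper's actual architecture; the encoder, decoder, head sizes, and loss weighting below are all assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualLearningSketch(nn.Module):
    """Toy sketch of a dual (explicit + implicit) learning objective.

    Assumptions (not from the abstract): a small conv encoder, a
    quality-level classification head for the explicit objective, and a
    linear decoder whose MSE to a clean patch serves as the implicit
    reconstruction objective. Equal loss weighting is also an assumption.
    """

    def __init__(self, num_quality_levels: int = 5, dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, dim, 1, 1)
            nn.Flatten(),             # -> (B, dim)
        )
        # Explicit branch: classify the compression level of the input.
        self.quality_head = nn.Linear(dim, num_quality_levels)
        # Implicit branch: reconstruct a (toy) 8x8 clean RGB patch.
        self.decoder = nn.Linear(dim, 3 * 8 * 8)

    def forward(self, lq_image, clean_patch, quality_label):
        z = self.encoder(lq_image)
        # Explicit learning: quality prediction objective.
        loss_explicit = F.cross_entropy(self.quality_head(z), quality_label)
        # Implicit learning: reconstruction objective.
        recon = self.decoder(z).view(-1, 3, 8, 8)
        loss_implicit = F.mse_loss(recon, clean_patch)
        return loss_explicit + loss_implicit


if __name__ == "__main__":
    model = DualLearningSketch()
    lq = torch.randn(2, 3, 32, 32)        # batch of low-quality images
    clean = torch.randn(2, 3, 8, 8)       # toy clean reconstruction targets
    labels = torch.tensor([0, 3])         # compression-level labels
    loss = model(lq, clean, labels)
    print(loss.item())
```

The key idea the sketch conveys: both objectives back-propagate through the same embedding, so the learned representation must simultaneously encode how compressed an image is (explicit) and enough content to support restoration (implicit).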