While zero-shot diffusion-based compression methods have made significant progress in recent years, they remain notoriously slow and computationally demanding. This paper presents an efficient zero-shot diffusion-based compression method that runs substantially faster than existing approaches while maintaining performance on par with the state of the art. Our method builds on the recently proposed Denoising Diffusion Codebook Models (DDCM) compression scheme, which compresses an image by sequentially choosing diffusion noise vectors from reproducible random codebooks, guiding the denoiser's output to reconstruct the target image. Our Turbo-DDCM modifies this framework by efficiently combining a large number of noise vectors at each denoising step, thereby significantly reducing the number of required denoising operations; this modification is coupled with an improved encoding protocol. Furthermore, we introduce two flexible variants of Turbo-DDCM: a priority-aware variant that prioritizes user-specified regions, and a distortion-controlled variant that compresses an image to a target PSNR rather than a target BPP. Comprehensive experiments position Turbo-DDCM as a compelling, practical, and flexible image compression scheme.
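As a heavily simplified illustration of the codebook idea described above (not the paper's actual algorithm), a single step can be pictured as follows: the encoder regenerates a random codebook from a shared seed, tries each noise candidate, and transmits only the index of the best one, since the decoder can rebuild the identical codebook. The denoiser here is a toy stand-in, and all dimensions and scale factors are illustrative.

```python
import numpy as np

def ddcm_step(x_t, target, step, codebook_size=64, dim=16):
    """One toy codebook-guided denoising step (illustrative sketch only)."""
    # Reproducible random codebook: seeded by the step index, so a decoder
    # holding the same seed regenerates the identical noise candidates.
    rng = np.random.default_rng(step)
    codebook = rng.standard_normal((codebook_size, dim))

    # Toy stand-in for a diffusion denoiser (a real one is a neural network).
    denoised = 0.9 * x_t

    # Choose the codebook noise whose resulting sample is closest to the
    # target image; the chosen index is all the encoder must transmit.
    candidates = denoised[None, :] + 0.1 * codebook
    idx = int(np.argmin(np.linalg.norm(candidates - target, axis=1)))
    return candidates[idx], idx
```

Repeating such a step over the full sampling trajectory yields a bitstream of codebook indices; Turbo-DDCM's contribution, per the abstract, is to combine many noise vectors per step so far fewer steps are needed.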