High computational overhead is a troublesome problem for diffusion models. Recent studies have leveraged post-training quantization (PTQ) to compress diffusion models, but most of them focus only on unconditional models, leaving the quantization of widely used pretrained text-to-image models, e.g., Stable Diffusion, largely unexplored. In this paper, we propose PCR (Progressive Calibration and Relaxing), a novel post-training quantization method for text-to-image diffusion models, which consists of a progressive calibration strategy that accounts for the quantization error accumulated across timesteps and an activation relaxing strategy that improves performance at negligible cost. Additionally, we demonstrate that the previous metrics for text-to-image diffusion model quantization are inaccurate due to the distribution gap. To tackle this problem, we propose QDiffBench, a novel benchmark that uses data in the same domain for more accurate evaluation and also measures the generalization of the quantized model beyond the calibration dataset. Extensive experiments on Stable Diffusion and Stable Diffusion XL demonstrate the superiority of our method and benchmark. Moreover, we are the first to quantize Stable Diffusion XL while maintaining its performance.
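The core idea behind progressive calibration can be illustrated with a toy sketch: instead of calibrating every timestep's activation scale on the full-precision trajectory, each timestep is calibrated on activations produced by the *already-quantized* earlier timesteps, so the scales reflect accumulated quantization error. The denoising step below (`step`) and the min-max scale rule are hypothetical stand-ins for illustration, not the authors' actual PCR implementation.

```python
import numpy as np

def quantize(x, scale, bits=8):
    # Uniform symmetric quantization with the given per-timestep scale.
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

def step(x, t):
    # Hypothetical stand-in for one denoising step of the diffusion model.
    return np.tanh(x) + 0.1 * np.cos(t + x)

def progressive_calibrate(x0, num_steps, bits=8):
    """Calibrate one activation scale per timestep, feeding each step the
    quantized output of the previous one so later scales account for the
    quantization error accumulated across timesteps (sketch of the idea)."""
    qmax = 2 ** (bits - 1) - 1
    scales = []
    x = x0.copy()
    for t in range(num_steps):
        a = step(x, t)                  # activation at this timestep
        s = np.abs(a).max() / qmax      # simple min-max scale from observed range
        scales.append(s)
        x = quantize(a, s, bits)        # propagate the *quantized* activation
    return scales
```

A naive alternative would compute all scales from one full-precision trajectory; the progressive loop instead lets each calibration step "see" the error introduced by earlier quantized steps, which is the accumulation effect the abstract refers to.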