Text-to-image generation with Stable Diffusion models has achieved notable success owing to their remarkable generation ability. However, the repetitive denoising process is computationally intensive at inference time, which renders Diffusion models less suitable for real-world applications that require low latency and scalability. Recent studies have employed post-training quantization (PTQ) and quantization-aware training (QAT) methods to compress Diffusion models. Nevertheless, prior research has often neglected to examine the consistency between results generated by quantized models and those from floating-point models. This consistency is crucial in fields such as content creation, design, and edge deployment, as it can significantly enhance both efficiency and system stability for practitioners. To ensure that quantized models generate high-quality and consistent images, we propose an efficient quantization framework for Stable Diffusion models. Our approach features a Serial-to-Parallel calibration pipeline that addresses the consistency of both the calibration and inference processes while ensuring training stability. Building on this pipeline, we further introduce a mixed-precision quantization strategy, multi-timestep activation quantization, and time information precalculation techniques to ensure high-fidelity generation relative to floating-point models. Through extensive experiments on Stable Diffusion v1-4, v2-1, and XL 1.0, we demonstrate that our method outperforms current state-of-the-art techniques on prompts from the COCO validation dataset and the Stable-Diffusion-Prompts dataset. Under W4A8 quantization settings, our approach improves both distribution similarity and visual similarity by 45%-60%.
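To make the W4A8 setting concrete: "W4A8" denotes 4-bit weights and 8-bit activations. The sketch below is a minimal, generic illustration of symmetric uniform fake quantization at these bit-widths; it is not the paper's calibration pipeline, and the function name and per-tensor scaling are illustrative assumptions. The point it demonstrates is why 4-bit weights are the harder part of the setting: with the same dynamic range, the 4-bit grid is roughly sixteen times coarser than the 8-bit one.

```python
import numpy as np

def fake_quantize(x, num_bits):
    """Symmetric uniform fake quantization: round to a num_bits integer
    grid, then map back to floating point (quantize-dequantize)."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 7 for 4-bit, 127 for 8-bit
    scale = np.max(np.abs(x)) / qmax        # per-tensor scale from the absolute max
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale                        # values snapped to the quantization grid

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4)).astype(np.float32)
activations = rng.standard_normal((4, 4)).astype(np.float32)

w4 = fake_quantize(weights, 4)       # W4: 4-bit weight grid (16 levels)
a8 = fake_quantize(activations, 8)   # A8: 8-bit activation grid (256 levels)

# The 4-bit grid introduces a much larger worst-case rounding error
# than the 8-bit grid over tensors of comparable range.
print(np.abs(weights - w4).max() > np.abs(activations - a8).max())  # True
```

In practice, Diffusion-model quantization schemes refine this basic recipe with calibrated scales, per-channel or mixed-precision assignments, and timestep-aware handling of activations, which is where the abstract's multi-timestep activation quantization fits in.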