This paper presents SANA-Sprint, an efficient diffusion model for ultra-fast text-to-image (T2I) generation. SANA-Sprint is built on a pre-trained foundation model and augmented with hybrid distillation, dramatically reducing inference steps from 20 to 1-4. We introduce three key innovations: (1) We propose a training-free approach that transforms a pre-trained flow-matching model for continuous-time consistency distillation (sCM), eliminating costly training from scratch and achieving high training efficiency. Our hybrid distillation strategy combines sCM with latent adversarial distillation (LADD): sCM ensures alignment with the teacher model, while LADD enhances single-step generation fidelity. (2) SANA-Sprint is a unified step-adaptive model that achieves high-quality generation in 1-4 steps, eliminating step-specific training and improving efficiency. (3) We integrate ControlNet with SANA-Sprint for real-time interactive image generation, providing instant visual feedback for user interaction. SANA-Sprint establishes a new Pareto frontier in the speed-quality tradeoff, achieving state-of-the-art performance with 7.59 FID and 0.74 GenEval in only 1 step, outperforming FLUX-schnell (7.94 FID / 0.71 GenEval) while being 10× faster (0.1s vs. 1.1s on an H100). It achieves latencies of 0.1s (T2I) and 0.25s (ControlNet) for 1024 × 1024 images on an H100, and 0.31s (T2I) on an RTX 4090, demonstrating its exceptional efficiency and potential for AI-powered consumer applications (AIPC). Code and pre-trained models will be open-sourced.
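To make the hybrid distillation strategy concrete, a minimal sketch of the combined training objective follows, assuming a scalar weighting hyperparameter $\lambda_{\text{adv}}$ (this symbol and the additive form are our illustration; the abstract specifies only that the two losses are combined):

$$\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{sCM}} + \lambda_{\text{adv}}\,\mathcal{L}_{\text{LADD}},$$

where $\mathcal{L}_{\text{sCM}}$ is the continuous-time consistency distillation loss that keeps the student aligned with the pre-trained flow-matching teacher, and $\mathcal{L}_{\text{LADD}}$ is the latent adversarial distillation loss that sharpens single-step generation fidelity.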