While large-scale text-to-image diffusion models continue to improve in visual quality, their growing size has widened the gap between state-of-the-art models and on-device solutions. To close this gap, we introduce NanoFLUX, a 2.4B-parameter text-to-image flow-matching model distilled from the 17B-parameter FLUX.1-Schnell pipeline via a progressive compression procedure designed to preserve generation quality. Our contributions are: (1) a model compression strategy that prunes redundant components of the diffusion transformer, shrinking it from 12B to 2B parameters; (2) a ResNet-based token downsampling mechanism that reduces latency by letting intermediate blocks operate on lower-resolution tokens while the surrounding blocks retain full-resolution processing (see the sketch below); (3) a novel text-encoder distillation approach that leverages visual signals from early layers of the denoiser during sampling. Empirically, NanoFLUX generates 512×512 images in approximately 2.5 seconds on mobile devices, demonstrating the feasibility of high-quality on-device text-to-image generation.
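To make contribution (2) concrete, here is a minimal PyTorch sketch of the token-downsampling idea: a residual strided convolution halves the token grid before the middle transformer blocks run, and a residual upsampler restores full resolution before the final blocks. All names (`TokenDownsample`, `TokenUpsample`, `DownsampledDiT`), channel widths, and block counts are illustrative assumptions; the paper's actual attention variants, conditioning paths, and pruning choices are not shown.

```python
# Illustrative sketch only: module names, widths, and block counts are
# assumptions, not NanoFLUX's actual architecture.
import torch
import torch.nn as nn


class TokenDownsample(nn.Module):
    """Residual strided-conv block that halves the token grid resolution."""
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(dim, dim, 3, stride=2, padding=1),
            nn.GroupNorm(8, dim),
            nn.SiLU(),
            nn.Conv2d(dim, dim, 3, padding=1),
        )
        self.skip = nn.Conv2d(dim, dim, 1, stride=2)  # matched-stride shortcut

    def forward(self, x):  # x: (B, D, H, W)
        return self.conv(x) + self.skip(x)


class TokenUpsample(nn.Module):
    """Residual nearest-neighbor upsample + conv back to full resolution."""
    def __init__(self, dim):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1),
            nn.GroupNorm(8, dim),
            nn.SiLU(),
            nn.Conv2d(dim, dim, 3, padding=1),
        )

    def forward(self, x):
        x = self.up(x)
        return self.conv(x) + x


class DownsampledDiT(nn.Module):
    """Full-resolution blocks at the ends, low-resolution blocks in the middle."""
    def __init__(self, dim=256, n_outer=2, n_inner=4, n_heads=8):
        super().__init__()
        mk = lambda n: nn.ModuleList(
            nn.TransformerEncoderLayer(dim, n_heads, dim * 4,
                                       batch_first=True, norm_first=True)
            for _ in range(n))
        self.pre, self.mid, self.post = mk(n_outer), mk(n_inner), mk(n_outer)
        self.down, self.up = TokenDownsample(dim), TokenUpsample(dim)

    def forward(self, x, h, w):  # x: (B, h*w, D) image tokens
        for blk in self.pre:                              # full resolution
            x = blk(x)
        g = x.transpose(1, 2).reshape(-1, x.shape[-1], h, w)
        g = self.down(g)                                  # (B, D, h/2, w/2)
        x = g.flatten(2).transpose(1, 2)                  # 4x fewer tokens
        for blk in self.mid:                              # cheap middle blocks
            x = blk(x)
        g = x.transpose(1, 2).reshape(-1, x.shape[-1], h // 2, w // 2)
        g = self.up(g)                                    # back to (B, D, h, w)
        x = g.flatten(2).transpose(1, 2)
        for blk in self.post:                             # full resolution again
            x = blk(x)
        return x


# Toy usage: a 16x16 token grid with 256-dim tokens.
model = DownsampledDiT()
out = model(torch.randn(2, 16 * 16, 256), h=16, w=16)
print(out.shape)  # torch.Size([2, 256, 256])
```

The latency argument follows directly from this structure: running the middle blocks on a 4x smaller token set makes their quadratic attention roughly 16x cheaper and their MLPs roughly 4x cheaper, while the full-resolution outer blocks preserve fine spatial detail.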