As scaling laws in generative AI push performance, they simultaneously concentrate the development of these models among actors with large computational resources. With a focus on text-to-image (T2I) generative models, we aim to address this bottleneck by demonstrating very low-cost training of large-scale T2I diffusion transformer models. Since the computational cost of transformers increases with the number of patches in each image, we propose to randomly mask up to 75% of the image patches during training. We propose a deferred masking strategy that preprocesses all patches using a patch-mixer before masking, which significantly reduces the performance degradation from masking and makes masking superior to model downscaling for reducing computational cost. We also incorporate the latest improvements in transformer architecture, such as mixture-of-experts layers, to improve performance, and we further identify the critical benefit of using synthetic images in micro-budget training. Finally, using only 37M publicly available real and synthetic images, we train a 1.16 billion parameter sparse transformer at an economical cost of only \$1,890 and achieve a 12.7 FID in zero-shot generation on the COCO dataset. Notably, our model achieves competitive FID and high-quality generations while incurring 118$\times$ lower cost than Stable Diffusion models and 14$\times$ lower cost than the current state-of-the-art approach, which costs \$28,400. We aim to release our end-to-end training pipeline to further democratize the training of large-scale diffusion models on micro-budgets.
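To make the deferred masking idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: a lightweight patch-mixer first processes all patches so that the surviving patches retain context from the ones about to be dropped, and only then is the random 75% mask applied before the expensive backbone. The names `DeferredMasking`, `patch_mixer`, and `mask_ratio`, and the choice of a single transformer block as the mixer, are illustrative assumptions.

```python
# Minimal sketch of deferred patch masking (illustrative, not the paper's code).
import torch
import torch.nn as nn

class DeferredMasking(nn.Module):
    """Run ALL patches through a cheap patch-mixer first, then randomly
    keep only a fraction of them for the costly transformer backbone."""

    def __init__(self, dim: int, mask_ratio: float = 0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Hypothetical patch-mixer: a single lightweight transformer block.
        self.patch_mixer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=8, dim_feedforward=4 * dim, batch_first=True
        )

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_patches, dim)
        b, n, d = patches.shape
        # 1) Mix information across all patches before any masking, so
        #    kept patches carry semantic context from masked ones.
        mixed = self.patch_mixer(patches)
        # 2) Randomly keep (1 - mask_ratio) of the patches per sample.
        num_keep = max(1, int(n * (1.0 - self.mask_ratio)))
        keep_idx = torch.rand(b, n, device=patches.device).argsort(dim=1)[:, :num_keep]
        kept = mixed.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, d))
        # The diffusion-transformer backbone now attends over ~25% of the
        # patches, which is where the training-cost savings come from.
        return kept
```

The ordering is the key point: masking before any mixing discards information irrecoverably, whereas a cheap pre-masking mixer lets the backbone train on a quarter of the tokens with far less quality loss, as the abstract claims.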