Recent advances in text-to-image generation have relied primarily on extensive datasets and parameter-heavy architectures. These requirements severely limit accessibility for researchers and practitioners who lack substantial computational resources. In this paper, we introduce \model, an efficient training paradigm for image generation models that combines knowledge distillation (KD) and Direct Preference Optimization (DPO). Drawing inspiration from the success of data KD techniques widely adopted in Multi-Modal Large Language Models (MLLMs), LightGen distills knowledge from state-of-the-art (SOTA) text-to-image models into a compact Masked Autoregressive (MAR) architecture with only $0.7B$ parameters. Using a compact synthetic dataset of just $2M$ high-quality images generated from varied captions, we demonstrate that data diversity significantly outweighs data volume in determining model performance. This strategy dramatically reduces computational demands and cuts pre-training time from potentially thousands of GPU-days to merely 88 GPU-days. Furthermore, to address the inherent shortcomings of synthetic data, particularly poor high-frequency details and spatial inaccuracies, we integrate DPO to refine image fidelity and positional accuracy. Comprehensive experiments confirm that LightGen achieves image generation quality comparable to SOTA models while significantly reducing computational resources and expanding accessibility for resource-constrained environments. Code is available at https://github.com/XianfengWu01/LightGen.