The recent advent of powerful video generation models such as Hunyuan, WanX, Veo3, and Kling has inaugurated a new era in the field. However, the practical deployment of these models is severely impeded by their substantial computational overhead, which stems from both their enormous parameter counts and the iterative, multi-step sampling required at inference time. Prior work on accelerating generative models has largely followed two separate trajectories: reducing the number of sampling steps (e.g., LCM, DMD, and MagicDistillation) or compressing the model size for more efficient inference (e.g., ICMD). The potential of compressing both dimensions simultaneously, yielding a model that is at once fast and lightweight, remains unexplored. In this paper, we propose FastLightGen, an algorithm that transforms large, computationally expensive models into fast, lightweight counterparts. Its core idea is to construct an optimal teacher model, one engineered to maximize student performance, within a synergistic framework that jointly distills model size and inference steps. Extensive experiments on HunyuanVideo-ATI2V and WanX-TI2V show that a generator using 4-step sampling and 30\% parameter pruning achieves the best visual quality under a constrained inference budget. Moreover, FastLightGen consistently outperforms all competing methods, establishing a new state of the art in efficient video generation.