Inspired by the success of the text-to-image (T2I) generation task, many researchers are devoting themselves to the text-to-video (T2V) generation task. Most T2V frameworks inherit from a T2I model and add extra temporal layers that are trained to generate dynamic videos, which can be viewed as a fine-tuning task. However, the traditional 3D-UNet follows a serial design in which the temporal layers come after the spatial layers, and this serial feature flow leads to high GPU memory and training-time consumption. We believe this serial design will incur even greater training costs as diffusion models and datasets grow larger, which is neither environmentally friendly nor conducive to the development of T2V. Therefore, we propose a highly efficient spatial-temporal parallel training paradigm for T2V tasks, named Mobius. In our 3D-UNet, the temporal layers and spatial layers run in parallel, which optimizes the feature flow and backpropagation. Mobius saves 24% GPU memory and 12% training time, which can greatly improve the T2V fine-tuning task and provides a novel insight for the AIGC community. We will release our code in the future.
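To make the serial-versus-parallel distinction concrete, the following is a minimal sketch (not the authors' released code; layer internals are placeholder functions) contrasting a serial block, where the temporal layer consumes the spatial layer's output, with a parallel block, where both branches read the same input and their outputs are fused:

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_layer(x, w):
    # stand-in for a spatial (per-frame) transformation
    return np.tanh(x @ w)

def temporal_layer(x, w):
    # stand-in for a temporal (cross-frame) transformation
    return np.tanh(x @ w)

def serial_block(x, w_s, w_t):
    # serial 3D-UNet style: temporal features depend on the spatial
    # output, so gradients must flow through both layers in sequence
    return temporal_layer(spatial_layer(x, w_s), w_t)

def parallel_block(x, w_s, w_t):
    # parallel (Mobius-style) design: both branches read the same
    # input and are fused by summation, so each branch backpropagates
    # to the input independently of the other
    return spatial_layer(x, w_s) + temporal_layer(x, w_t)

x = rng.standard_normal((4, 8))    # (frames, features), toy sizes
w_s = rng.standard_normal((8, 8))
w_t = rng.standard_normal((8, 8))

print(serial_block(x, w_s, w_t).shape)    # (4, 8)
print(parallel_block(x, w_s, w_t).shape)  # (4, 8)
```

In the parallel form, the two branches share an input activation and neither branch's backward pass depends on the other, which is the kind of feature-flow restructuring the abstract credits for the memory and training-time savings.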