Inspired by the success of the text-to-image (T2I) generation task, many researchers are devoting themselves to the text-to-video (T2V) generation task. Most T2V frameworks inherit from a T2I model and add extra temporal layers that are trained to generate dynamic videos, which can be viewed as a fine-tuning task. However, the traditional 3D-UNet operates in a serial mode: the temporal layers follow the spatial layers, and this serial feature flow results in high GPU memory and training-time consumption. We believe this serial mode will incur even greater training costs as diffusion models and datasets scale up, which is neither environmentally friendly nor conducive to the development of T2V. Therefore, we propose a highly efficient spatial-temporal parallel training paradigm for T2V tasks, named Mobius. In our 3D-UNet, the temporal layers and spatial layers run in parallel, which optimizes the feature flow and backpropagation. Mobius saves 24% GPU memory and 12% training time, which can greatly improve T2V fine-tuning and provides a novel insight for the AIGC community. We will release our code in the future.
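The contrast between the two feature flows can be sketched as follows. This is a minimal illustration only: the stand-in layer functions and the element-wise-sum fusion of the two branches are assumptions for exposition, since the abstract does not specify how Mobius fuses the parallel branch outputs.

```python
def spatial_layer(x):
    # Stand-in for a spatial (per-frame) transformation.
    return [2 * v for v in x]

def temporal_layer(x):
    # Stand-in for a temporal (cross-frame) transformation.
    return [v + 1 for v in x]

def serial_block(x):
    # Traditional 3D-UNet block: the temporal layer follows the
    # spatial layer, so activations and gradients traverse both
    # in sequence.
    return temporal_layer(spatial_layer(x))

def parallel_block(x):
    # Mobius-style block: both layers consume the same input in
    # parallel; their outputs are fused (here, by element-wise sum,
    # an illustrative assumption).
    s = spatial_layer(x)
    t = temporal_layer(x)
    return [a + b for a, b in zip(s, t)]

frames = [1.0, 2.0, 3.0]
print(serial_block(frames))    # [3.0, 5.0, 7.0]
print(parallel_block(frames))  # [4.0, 7.0, 10.0]
```

In the serial block, backpropagation must pass through the temporal layer before reaching the spatial layer; in the parallel block, gradients reach both branches directly from the fusion point, which is the property the abstract credits for the memory and training-time savings.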