We propose LayerSync, a domain-agnostic approach for improving the generation quality and the training efficiency of diffusion models. Prior studies have highlighted the connection between generation quality and the representations learned by diffusion models, showing that external guidance on a model's intermediate representations accelerates training. We reconceptualize this paradigm by regularizing diffusion models with their own intermediate representations. Building on the observation that representation quality varies across diffusion model layers, we show that the most semantically rich representations can act as intrinsic guidance for weaker ones, reducing the need for external supervision. Our approach, LayerSync, is a self-sufficient, plug-and-play regularization term that adds no overhead to diffusion model training and generalizes beyond the visual domain to other modalities. LayerSync requires neither pretrained models nor additional data. We extensively evaluate the method on image generation and demonstrate its applicability to other domains such as audio, video, and motion generation. We show that it consistently improves generation quality and training efficiency. For example, we speed up the training of a flow-based transformer by over 8.75x on the ImageNet dataset and improve the generation quality by 23.6%. The code is available at https://github.com/vita-epfl/LayerSync.
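To make the idea of self-guidance concrete, below is a minimal sketch of what such a regularization term could look like in PyTorch. It aligns a weaker layer's token features to a stronger (more semantically rich) layer's features via negative cosine similarity, with gradients stopped through the stronger layer so it acts as an internal teacher. The function name, the loss form, and the assumed feature shape [batch, tokens, dim] are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of a LayerSync-style self-regularization term
# (assumed form; see the paper and repository for the actual method).
import torch
import torch.nn.functional as F

def layersync_loss(weak_feats: torch.Tensor, strong_feats: torch.Tensor) -> torch.Tensor:
    """Align weaker-layer features to stronger-layer features.

    Both inputs are assumed to be transformer features of shape
    [batch, tokens, dim]. Gradients are stopped through the stronger
    layer so it serves as a fixed internal target.
    """
    target = F.normalize(strong_feats.detach(), dim=-1)  # stop-gradient teacher
    weak = F.normalize(weak_feats, dim=-1)
    # Negative cosine similarity, averaged over tokens and batch.
    return -(weak * target).sum(dim=-1).mean()

# Hypothetical usage: added to the standard diffusion/flow-matching loss
# with a weighting coefficient, e.g.
#   total_loss = diffusion_loss + lam * layersync_loss(h_weak, h_strong)
# where h_weak and h_strong are intermediate activations of the same model.
```

Since both the guidance signal and the regularized features come from the same forward pass, no pretrained encoder or extra data is needed, which is what makes the term self-sufficient and plug-and-play.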