Recent studies have demonstrated that learning a meaningful internal representation can accelerate generative training. However, existing approaches require either introducing an off-the-shelf external representation task or relying on a large-scale, pre-trained external representation encoder to provide representation guidance during training. In this work, we posit that the discriminative process inherent to diffusion transformers enables them to provide such guidance without any external representation components. We propose Self-Representation Alignment (SRA), a simple yet effective method that derives representation guidance from the internal representations of the diffusion transformer itself. SRA aligns the latent representation of an earlier layer, conditioned on higher noise, with that of a later layer, conditioned on lower noise, progressively strengthening overall representation learning during training only. Experimental results show that applying SRA to DiTs and SiTs yields consistent performance improvements and substantially outperforms approaches that rely on an auxiliary representation task. Our approach also achieves performance comparable to methods that depend on an external pre-trained representation encoder, demonstrating that diffusion transformers can supply their own representation alignment to accelerate training.
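To make the alignment concrete, below is a minimal PyTorch sketch of an SRA-style training term under stated assumptions: the model interface `model(x, t, return_hidden=k)` returning layer-k hidden states, the linear noise schedule in `add_noise`, the layer indices, the projection head `proj_head`, and the negative-cosine objective are all illustrative choices, not the paper's released implementation. The key structural point from the method description is preserved: the later-layer, lower-noise branch is detached, so guidance flows only into the earlier-layer, higher-noise branch.

```python
import torch
import torch.nn.functional as F

def add_noise(x0, noise, t):
    # Assumed linear (flow-style) corruption: x_t = (1 - t) * x0 + t * noise.
    # The actual schedule depends on the DiT/SiT variant being trained.
    t = t.view(-1, *([1] * (x0.dim() - 1)))
    return (1.0 - t) * x0 + t * noise

def sra_loss(model, proj_head, x0, t_high, t_low,
             early_layer=4, late_layer=20):
    """One SRA-style alignment term: early-layer features under higher
    noise are pulled toward late-layer features under lower noise.
    `return_hidden` is a hypothetical interface for extracting the
    hidden states of a given transformer block."""
    noise = torch.randn_like(x0)

    # Student view: earlier layer, heavier corruption (receives gradients).
    h_early = model(add_noise(x0, noise, t_high), t_high,
                    return_hidden=early_layer)

    # Target view: later layer, lighter corruption (stop-gradient).
    with torch.no_grad():
        h_late = model(add_noise(x0, noise, t_low), t_low,
                       return_hidden=late_layer)

    # Project the student features, then align via negative cosine similarity.
    z = proj_head(h_early)  # e.g. a lightweight MLP head, training-time only
    return -F.cosine_similarity(z, h_late, dim=-1).mean()
```

In practice this term would be added to the standard diffusion (or flow-matching) loss with a weighting coefficient; since the alignment branch and projection head are used during training only, the sketch adds no inference-time overhead.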