We present Unified Latents (UL), a framework for learning latent representations that are jointly regularized by a diffusion prior and decoded by a diffusion model. By linking the encoder's output noise to the prior's minimum noise level, we obtain a simple training objective that provides a tight upper bound on the latent bitrate. On ImageNet-512, our approach achieves a competitive FID of 1.4 alongside high reconstruction quality (PSNR), while requiring fewer training FLOPs than models trained on Stable Diffusion latents. On Kinetics-600, we set a new state-of-the-art FVD of 1.3.