The vast applications of deep generative models are anchored in three core capabilities -- generating new instances, reconstructing inputs, and learning compact representations -- across various data types, such as discrete text/protein sequences and continuous images. Existing model families, like Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), autoregressive models, and diffusion models, generally excel in specific capabilities and data types but fall short in others. We introduce generalized diffusion with learnable encoder-decoder (DiLED), which seamlessly integrates the three core capabilities for broad applicability and enhanced performance. DiLED generalizes the Gaussian noising-denoising of standard diffusion by introducing parameterized encoding-decoding. Crucially, DiLED remains compatible with the well-established diffusion model objective and training recipes, allowing the encoder-decoder parameters to be learned effectively and jointly with the diffusion process. By choosing appropriate encoders/decoders (e.g., large language models), DiLED naturally applies to different data types. Extensive experiments on text, proteins, and images demonstrate DiLED's flexibility in handling diverse data and tasks, as well as its strong improvements over various existing models.
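To make the generalization concrete, the displayed equations below give a minimal sketch based only on the description above; the notation (encoder $E_\phi$, decoder $p_\psi$, latents $z_t$, denoiser $\epsilon_\theta$) and the reconstruction term are illustrative assumptions, not taken from the paper itself. Standard diffusion corrupts data with fixed Gaussian transitions, whereas the sketch replaces the boundary steps with learned encoding and decoding and runs the Gaussian chain on the latents, training all parameters jointly under the usual denoising objective.

\begin{align*}
\text{Standard diffusion:}\quad & q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\right)\\[2pt]
\text{DiLED (sketch):}\quad & z_0 = E_\phi(x),\qquad q(z_t \mid z_{t-1}) = \mathcal{N}\!\left(z_t;\ \sqrt{1-\beta_t}\,z_{t-1},\ \beta_t I\right),\qquad \hat{x} \sim p_\psi(x \mid z_0)\\[2pt]
\text{Joint training:}\quad & \min_{\phi,\psi,\theta}\ \mathbb{E}_{x,\,t,\,\epsilon}\!\left[\,\|\epsilon - \epsilon_\theta(z_t, t)\|^2\,\right]\ \ (\text{plus a reconstruction term for } p_\psi,\ \text{assumed here})
\end{align*}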