Multimodal generative models require a unified approach to handle both discrete data (e.g., text and code) and continuous data (e.g., images, audio, and video). In this work, we propose Latent Language Modeling (LatentLM), which seamlessly integrates continuous and discrete data using causal Transformers. Specifically, we employ a variational autoencoder (VAE) to represent continuous data as latent vectors and introduce next-token diffusion for the autoregressive generation of these vectors. Additionally, we develop $\sigma$-VAE to address the challenge of variance collapse, which is crucial for autoregressive modeling. Extensive experiments demonstrate the effectiveness of LatentLM across various modalities. In image generation, LatentLM surpasses Diffusion Transformers in both performance and scalability. When integrated into multimodal large language models, LatentLM provides a general-purpose interface that unifies multimodal generation and understanding. Experimental results show that LatentLM achieves favorable performance compared to Transfusion and vector-quantized models when scaling up training tokens. In text-to-speech synthesis, LatentLM outperforms the state-of-the-art VALL-E 2 model in speaker similarity and robustness while requiring 10x fewer decoding steps. These results establish LatentLM as a highly effective and scalable approach for advancing large multimodal models.
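To make the pipeline sketched in the abstract concrete, here is a minimal toy illustration of the two core ideas: a VAE that samples latents with a *fixed* standard deviation (the intuition behind $\sigma$-VAE's defense against variance collapse), and a small diffusion head that denoises the next latent vector conditioned on a causal Transformer's hidden state (next-token diffusion). This is a hedged sketch, not the paper's implementation: the module names (`SigmaVAEEncoder`, `DiffusionHead`, `generate_next_latent`), the fixed `sigma` value, the 8-step loop, and the linear interpolation schedule are all illustrative assumptions.

```python
# Toy sketch of next-token diffusion over sigma-VAE latents.
# All names, shapes, and the denoising schedule are illustrative
# assumptions, not the LatentLM reference implementation.
import torch
import torch.nn as nn


class SigmaVAEEncoder(nn.Module):
    """Maps a continuous input to a latent vector, sampling with a
    fixed standard deviation sigma instead of a learned one, so the
    latent distribution cannot collapse to near-zero variance."""

    def __init__(self, d_in: int, d_latent: int, sigma: float = 0.5):
        super().__init__()
        self.mu = nn.Linear(d_in, d_latent)
        self.sigma = sigma  # fixed, not predicted (assumption)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu = self.mu(x)
        return mu + self.sigma * torch.randn_like(mu)


class DiffusionHead(nn.Module):
    """Tiny denoiser: predicts the clean latent from a noisy latent,
    a timestep embedding, and the Transformer hidden state h."""

    def __init__(self, d_latent: int, d_model: int, n_steps: int = 8):
        super().__init__()
        self.n_steps = n_steps
        self.t_embed = nn.Embedding(n_steps, d_model)
        self.net = nn.Sequential(
            nn.Linear(d_latent + d_model, d_model),
            nn.GELU(),
            nn.Linear(d_model, d_latent),
        )

    def forward(self, z_noisy, t, h):
        cond = self.t_embed(t) + h  # combine timestep and context
        return self.net(torch.cat([z_noisy, cond], dim=-1))


@torch.no_grad()
def generate_next_latent(head: DiffusionHead, h: torch.Tensor, d_latent: int):
    """One autoregressive step: start from pure noise and iteratively
    denoise the next latent vector, conditioned on the causal
    Transformer state h for the current position."""
    z = torch.randn(h.size(0), d_latent)
    for step in reversed(range(head.n_steps)):
        t = torch.full((h.size(0),), step, dtype=torch.long)
        z_hat = head(z, t, h)        # predicted clean latent
        alpha = step / head.n_steps  # crude linear schedule (assumption)
        z = alpha * z + (1 - alpha) * z_hat
    return z


# Usage: h stands in for the last hidden state of a causal Transformer.
head = DiffusionHead(d_latent=64, d_model=128)
h = torch.randn(2, 128)
z_next = generate_next_latent(head, h, d_latent=64)  # shape (2, 64)
```

The generated latent would then be fed to the VAE decoder to produce the continuous output (an image patch, an audio frame), while discrete tokens continue to use the ordinary softmax head; the few-step denoising loop is what allows far fewer decoding steps than a full diffusion sampler.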