We introduce Transfusion, a recipe for training a multi-modal model over discrete and continuous data. Transfusion combines the language modeling loss function (next token prediction) with diffusion to train a single transformer over mixed-modality sequences. We pretrain multiple Transfusion models up to 7B parameters from scratch on a mixture of text and image data, establishing scaling laws with respect to a variety of uni- and cross-modal benchmarks. Our experiments show that Transfusion scales significantly better than quantizing images and training a language model over discrete image tokens. By introducing modality-specific encoding and decoding layers, we can further improve the performance of Transfusion models, and even compress each image to just 16 patches. We further demonstrate that scaling our Transfusion recipe to 7B parameters and 2T multi-modal tokens produces a model that can generate images and text on a par with similar scale diffusion models and language models, reaping the benefits of both worlds.
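At its core, the recipe trains one transformer with a single objective that sums the next-token-prediction loss on text tokens with a diffusion (denoising) loss on image patches. The following is a minimal sketch, not the paper's implementation: the MSE denoising term, the function name, and the balancing coefficient `lmbda` are illustrative assumptions.

```python
import numpy as np

def transfusion_loss(text_logits, text_targets, eps_pred, eps_true, lmbda=5.0):
    """Sketch of a combined objective: LM loss on text + diffusion loss on images.

    text_logits : (T, V) unnormalized next-token scores over a vocabulary of size V
    text_targets: (T,)   integer next-token targets
    eps_pred, eps_true : (P, D) predicted vs. true noise for P image patches
    lmbda : balancing coefficient between the two terms (illustrative value)
    """
    # Cross-entropy for next-token prediction (numerically stable log-softmax).
    logits = text_logits - text_logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    lm_loss = -log_probs[np.arange(len(text_targets)), text_targets].mean()

    # DDPM-style mean-squared error on the predicted noise for image patches.
    diff_loss = ((eps_pred - eps_true) ** 2).mean()

    # One scalar loss drives both modalities through the shared transformer.
    return lm_loss + lmbda * diff_loss
```

In a real mixed-modality batch, the two terms would be computed over the text and image positions of the same sequence; here they are passed in separately only to keep the sketch self-contained.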