The development of large language models (LLMs) has expanded to multi-modal systems that process text, images, and speech within a unified framework. Training these models demands significantly larger datasets and computational resources than text-only LLMs. To address these scaling challenges, we introduce Mixture-of-Transformers (MoT), a sparse multi-modal transformer architecture that significantly reduces pretraining computational costs. MoT decouples the model's non-embedding parameters by modality -- including the feed-forward networks, attention matrices, and layer normalization -- enabling modality-specific processing while retaining global self-attention over the full input sequence. We evaluate MoT across multiple settings and model scales. In the Chameleon 7B setting (autoregressive text-and-image generation), MoT matches the dense baseline's performance using only 55.8\% of the FLOPs. When extended to include speech, MoT reaches speech performance comparable to the dense baseline's with only 37.2\% of the FLOPs. In the Transfusion setting, where text and images are trained with different objectives, a 7B MoT model matches the image-modality performance of the dense baseline with one third of the FLOPs, and a 760M MoT model outperforms a 1.4B dense baseline on key image-generation metrics. System profiling further highlights MoT's practical benefits: it reaches the dense baseline's image quality in 47.2\% of the wall-clock time and its text quality in 75.6\% (measured on AWS p4de.24xlarge instances with NVIDIA A100 GPUs).
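The core idea -- per-modality weights for the projections and feed-forward layers, with a single global self-attention over the whole sequence -- can be sketched as follows. This is a minimal numpy illustration under assumed toy dimensions, not the paper's implementation; the weight names and the ReLU feed-forward are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d, seq_len, n_modalities = 8, 6, 2
# Toy modality labels per token: 0 = text, 1 = image.
modality = np.array([0, 0, 1, 1, 0, 1])

# Decoupled (modality-specific) non-embedding parameters:
# one set of Q/K/V projections and FFN weights per modality.
W_q = rng.normal(size=(n_modalities, d, d)) / np.sqrt(d)
W_k = rng.normal(size=(n_modalities, d, d)) / np.sqrt(d)
W_v = rng.normal(size=(n_modalities, d, d)) / np.sqrt(d)
W_ffn = rng.normal(size=(n_modalities, d, d)) / np.sqrt(d)

x = rng.normal(size=(seq_len, d))

# Each token is projected with its own modality's weights...
q = np.stack([x[i] @ W_q[modality[i]] for i in range(seq_len)])
k = np.stack([x[i] @ W_k[modality[i]] for i in range(seq_len)])
v = np.stack([x[i] @ W_v[modality[i]] for i in range(seq_len)])

# ...but self-attention is global: every token attends to the
# full sequence, regardless of modality.
attn_weights = softmax(q @ k.T / np.sqrt(d))
attn_out = attn_weights @ v

# The feed-forward network is again modality-specific.
out = np.stack([np.maximum(attn_out[i] @ W_ffn[modality[i]], 0)
                for i in range(seq_len)])
print(out.shape)  # (6, 8)
```

Because the routing is determined by a token's known modality rather than a learned gate, the sparsity adds no load-balancing loss and no router parameters, while the shared attention keeps cross-modal interaction intact.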