We present MoST (Mixture of Speech and Text), a multimodal large language model that seamlessly integrates speech and text processing through our proposed Modality-Aware Mixture of Experts (MAMoE) architecture. Current multimodal models typically process representations from different modalities with identical parameters, disregarding their inherent representational differences; we instead introduce specialized routing pathways that direct tokens to modality-appropriate experts based on input type. MAMoE simultaneously strengthens modality-specific learning and cross-modal understanding through two complementary components: modality-specific expert groups that capture domain-specific patterns, and shared experts that facilitate information transfer between modalities. Building on this architecture, we develop an efficient transformation pipeline that adapts a pretrained MoE language model through strategic post-training on ASR and TTS datasets, followed by fine-tuning on a carefully curated speech-text instruction dataset. A key feature of this pipeline is that it relies exclusively on fully accessible, open-source datasets to achieve strong performance and data efficiency. Comprehensive evaluations across ASR, TTS, audio language modeling, and spoken question answering benchmarks show that MoST consistently outperforms existing models of comparable parameter counts. Our ablation studies confirm that the modality-specific routing mechanism and the shared-expert design contribute significantly to performance gains across all tested domains. To our knowledge, MoST is the first fully open-source speech-text LLM built on a Mixture of Experts architecture.\footnote{We release the MoST model, training code, inference code, and training data at https://github.com/NUS-HPC-AI-Lab/MoST.}
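The routing scheme described above can be illustrated with a minimal sketch: each token is dispatched to an expert within its own modality's group, while shared experts process every token to carry cross-modal information. All names, the toy "expert" functions, and the single-expert gate here are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of MAMoE-style routing (illustrative assumptions only):
# speech and text tokens are routed to separate expert groups, while
# shared experts are applied to every token regardless of modality.

SPEECH_EXPERTS = [lambda x, s=s: x * s for s in (2, 3)]    # speech-only experts
TEXT_EXPERTS = [lambda x, s=s: x + s for s in (10, 20)]    # text-only experts
SHARED_EXPERTS = [lambda x: x - 1]                         # always-on shared experts

def mamoe_layer(token_value, modality, gate_index):
    """Route a token to its modality's expert group, then add the shared path.

    token_value: toy scalar standing in for a token's hidden state.
    modality:    "speech" or "text", determining the expert group.
    gate_index:  index of the expert chosen by the (omitted) router.
    """
    group = SPEECH_EXPERTS if modality == "speech" else TEXT_EXPERTS
    expert_out = group[gate_index](token_value)               # modality-specific path
    shared_out = sum(e(token_value) for e in SHARED_EXPERTS)  # cross-modal path
    return expert_out + shared_out
```

Usage: `mamoe_layer(4, "speech", 0)` takes the first speech expert (4 * 2 = 8) plus the shared output (4 - 1 = 3), giving 11; the same token tagged as text with `gate_index=1` would instead use a text expert (4 + 20 = 24), giving 27.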