We introduce UniMuMo, a unified multimodal model that takes arbitrary text, music, and motion data as input conditions and generates outputs across all three modalities. To address the lack of time-synchronized data, we align unpaired music and motion data based on rhythmic patterns, allowing us to leverage existing large-scale music-only and motion-only datasets. By converting music, motion, and text into token-based representations, our model bridges these modalities through a unified encoder-decoder transformer architecture. To support multiple generation tasks within a single framework, we introduce several architectural improvements. We propose encoding motion with a music codebook, mapping motion into the same feature space as music. We introduce a music-motion parallel generation scheme that unifies all music and motion generation tasks into a single transformer decoder architecture trained on the single task of music-motion joint generation. Moreover, the model is built by fine-tuning existing pre-trained single-modality models, significantly reducing computational demands. Extensive experiments demonstrate that UniMuMo achieves competitive results on all unidirectional generation benchmarks across the music, motion, and text modalities. Quantitative results are available on the \href{https://hanyangclarence.github.io/unimumo_demo/}{project page}.