The body movements accompanying speech help speakers express their ideas. Co-speech motion generation is an important approach for synthesizing realistic avatars. Because the correspondence between speech and motion is intricate, generating realistic and diverse motion is challenging. In this paper, we propose MMoFusion, a Multi-modal co-speech Motion generation framework based on the diffusion model, which ensures both the authenticity and diversity of the generated motion. We propose a progressive fusion strategy that strengthens both inter-modal and intra-modal interaction and efficiently integrates multi-modal information. Specifically, we employ a masked style matrix built from emotion and identity information to control the generation of different motion styles. Temporal modeling of speech and motion is partitioned into style-guided specific feature encoding and shared feature encoding, aiming to learn both intra-modal and inter-modal features. In addition, we propose a geometric loss that enforces coherence of joint velocities and accelerations across frames. Given speech as input and editable identity and emotion, our framework generates vivid, diverse, and style-controllable motion of arbitrary length. Extensive experiments demonstrate that our method outperforms current co-speech motion generation methods on both the upper body and the more challenging full body.
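The abstract does not give the exact form of the geometric loss; the following is a minimal sketch, assuming it penalizes mismatches between predicted and ground-truth first-order (velocity) and second-order (acceleration) frame differences of the joint positions. The function name, tensor layout, and weights are hypothetical and only illustrate the idea of enforcing velocity and acceleration coherence among frames.

```python
import torch

def geometric_loss(pred, gt, w_vel=1.0, w_acc=1.0):
    """Hypothetical sketch of a velocity/acceleration coherence loss.

    pred, gt: tensors of shape (batch, frames, joints, 3) holding joint positions.
    """
    # First-order temporal differences approximate per-frame joint velocity.
    vel_pred = pred[:, 1:] - pred[:, :-1]
    vel_gt = gt[:, 1:] - gt[:, :-1]

    # Second-order differences approximate per-frame joint acceleration.
    acc_pred = vel_pred[:, 1:] - vel_pred[:, :-1]
    acc_gt = vel_gt[:, 1:] - vel_gt[:, :-1]

    # Penalize deviation of predicted velocities/accelerations from ground truth.
    loss_vel = torch.mean((vel_pred - vel_gt) ** 2)
    loss_acc = torch.mean((acc_pred - acc_gt) ** 2)
    return w_vel * loss_vel + w_acc * loss_acc
```

Such a term is typically added to the main reconstruction or diffusion objective with small weights so that smoothness does not dominate positional accuracy.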