Audio-driven co-speech human gesture generation has made remarkable advances in recent years. However, most previous work focuses only on single-person audio-driven gesture generation. We aim to solve the problem of conversational co-speech gesture generation, which considers multiple participants in a conversation. This is a novel and challenging task because it requires simultaneously incorporating semantic information and other relevant features from both the primary speaker and the interlocutor. To this end, we propose CoDiffuseGesture, a diffusion model-based approach for speech-driven interaction gesture generation that models bilateral conversational intention, emotion, and semantic context. Our method synthesizes interactive, speech-matched, high-quality gestures for conversational motion through an intention perception module and an emotion reasoning module that operate at the sentence level using a pretrained language model. Experimental results demonstrate the promising performance of the proposed method.
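To make the conditioning setup concrete, the sketch below shows one plausible way a diffusion denoiser could be conditioned on audio features plus sentence-level intention/emotion embeddings from both the primary speaker and the interlocutor. This is not the authors' implementation; all module names, dimensions, and the fusion strategy (prepending condition tokens to a Transformer encoder) are illustrative assumptions.

```python
# Minimal sketch (not the CoDiffuseGesture code): a conditional diffusion denoiser
# for co-speech gesture generation, conditioned on per-frame audio features and
# sentence-level intention/emotion embeddings (e.g. from a pretrained language
# model) for both the primary speaker and the interlocutor.
import torch
import torch.nn as nn


class ConditionalGestureDenoiser(nn.Module):
    def __init__(self, pose_dim=165, audio_dim=128, text_dim=768, hidden=256):
        super().__init__()
        # Project noisy poses and per-frame audio features into a shared space.
        self.pose_in = nn.Linear(pose_dim, hidden)
        self.audio_in = nn.Linear(audio_dim, hidden)
        # Fuse speaker and interlocutor sentence-level embeddings into one token.
        self.context_in = nn.Linear(2 * text_dim, hidden)
        # Embed the diffusion timestep.
        self.time_emb = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.pose_out = nn.Linear(hidden, pose_dim)

    def forward(self, noisy_pose, t, audio, speaker_ctx, partner_ctx):
        # noisy_pose: (B, T, pose_dim), audio: (B, T, audio_dim)
        # speaker_ctx / partner_ctx: (B, text_dim) sentence-level embeddings
        B = noisy_pose.shape[0]
        ctx = self.context_in(torch.cat([speaker_ctx, partner_ctx], dim=-1))  # (B, hidden)
        t_tok = self.time_emb(t.float().view(B, 1))                           # (B, hidden)
        tokens = self.pose_in(noisy_pose) + self.audio_in(audio)              # (B, T, hidden)
        # Prepend condition tokens so self-attention mixes them with the motion frames.
        seq = torch.cat([ctx.unsqueeze(1), t_tok.unsqueeze(1), tokens], dim=1)
        out = self.backbone(seq)[:, 2:]   # drop the two condition tokens
        return self.pose_out(out)         # predicted noise (or clean pose), (B, T, pose_dim)


if __name__ == "__main__":
    model = ConditionalGestureDenoiser()
    x = torch.randn(2, 60, 165)          # noisy gesture sequence
    t = torch.randint(0, 1000, (2,))     # diffusion timesteps
    audio = torch.randn(2, 60, 128)      # frame-aligned audio features
    spk = torch.randn(2, 768)            # speaker intention/emotion embedding
    par = torch.randn(2, 768)            # interlocutor embedding
    print(model(x, t, audio, spk, par).shape)  # torch.Size([2, 60, 165])
```

In this sketch the interlocutor's context enters the denoiser exactly as the speaker's does, which is one simple way to realize "bilateral" conditioning; the paper's intention perception and emotion reasoning modules could replace the plain linear fusion used here.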