Advances in generative models and sequence learning have substantially advanced dance motion generation, yet existing methods still offer only coarse semantic control and struggle to remain coherent over long sequences. In this work, we present Listen to Rhythm, Choose Movements (LRCM), a multimodal-guided diffusion framework that supports diverse input modalities and autoregressive dance motion generation. We explore a feature decoupling paradigm for dance datasets and generalize it to the Motorica Dance dataset, separating motion-capture data, audio rhythm, and professionally annotated global and local text descriptions. Our diffusion architecture integrates an audio-latent Conformer and a text-latent Cross-Conformer, and incorporates a Motion Temporal Mamba Module (MTMM) to enable smooth, long-duration autoregressive synthesis. Experiments show that LRCM performs strongly on both functional capability and quantitative metrics, and is particularly promising for multimodal input scenarios and long-sequence generation. We will release the full codebase, dataset, and pretrained models publicly upon acceptance.
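To make the conditioning pathway concrete, the sketch below shows one way a denoiser could fuse noisy motion latents with frame-aligned audio features and text tokens before a temporal module. This is purely illustrative and not the LRCM implementation: all module names, dimensions, and fusion choices are assumptions, and a GRU stands in for the Motion Temporal Mamba Module to keep the example dependency-free.

```python
# Illustrative sketch (not the authors' code): a diffusion denoiser that conditions
# noisy motion latents on audio and text features. Names, shapes, and fusion choices
# are assumptions for exposition only.
import torch
import torch.nn as nn


class MultimodalDenoiser(nn.Module):
    def __init__(self, motion_dim=135, audio_dim=128, text_dim=512, d_model=256):
        super().__init__()
        self.motion_in = nn.Linear(motion_dim, d_model)
        self.audio_in = nn.Linear(audio_dim, d_model)
        self.text_in = nn.Linear(text_dim, d_model)
        self.time_emb = nn.Sequential(
            nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        # Frame-aligned audio fusion (audio and motion share the frame rate).
        self.audio_mix = nn.Linear(2 * d_model, d_model)
        # Text conditioning via cross-attention (motion queries attend to text tokens),
        # loosely mirroring a Cross-Conformer-style fusion.
        self.text_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        # Stand-in for the Motion Temporal Mamba Module: any sequence model that
        # carries state across frames; a GRU keeps this sketch self-contained.
        self.temporal = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, motion_dim)

    def forward(self, noisy_motion, t, audio_feat, text_tokens):
        # noisy_motion: (B, T, motion_dim); t: (B,) diffusion timesteps
        # audio_feat: (B, T, audio_dim); text_tokens: (B, L, text_dim)
        h = self.motion_in(noisy_motion) + self.time_emb(t[:, None, None].float())
        a = self.audio_in(audio_feat)
        h = self.audio_mix(torch.cat([h, a], dim=-1))   # fuse per-frame audio
        txt = self.text_in(text_tokens)
        h = h + self.text_attn(h, txt, txt)[0]          # cross-attend to text
        h, _ = self.temporal(h)                          # propagate temporal state
        return self.out(h)                               # predicted noise (or x0)


# Usage: one denoising call on random tensors.
model = MultimodalDenoiser()
eps_hat = model(torch.randn(2, 120, 135), torch.randint(0, 1000, (2,)),
                torch.randn(2, 120, 128), torch.randn(2, 16, 512))
print(eps_hat.shape)  # torch.Size([2, 120, 135])
```

In an autoregressive setting, such a denoiser would be applied segment by segment, with the temporal module's state (here the GRU hidden state, in the paper the MTMM) carrying context from previously generated motion into the next segment.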