We present StyleMotif, a novel Stylized Motion Latent Diffusion model that generates motion conditioned on both content and style drawn from multiple modalities. Unlike existing approaches that either focus on generating diverse motion content or on transferring style from reference sequences, StyleMotif seamlessly synthesizes motion across a wide range of content while incorporating stylistic cues from multi-modal inputs, including motion, text, image, video, and audio. To achieve this, we introduce a style-content cross-fusion mechanism and align a style encoder with a pre-trained multi-modal model, ensuring that the generated motion accurately captures the reference style while preserving realism. Extensive experiments demonstrate that our framework surpasses existing methods in stylized motion generation and exhibits emergent capabilities for multi-modal motion stylization, enabling more nuanced motion synthesis. Source code and pre-trained models will be released upon acceptance. Project Page: https://stylemotif.github.io
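The abstract does not specify how the style-content cross-fusion mechanism is realized. As a rough illustration only, the minimal PyTorch sketch below assumes one plausible design: style features from the aligned style encoder are injected into the noisy motion latents via residual cross-attention inside the denoiser. The class name `StyleContentCrossFusion`, the dimensions, and the residual-attention layout are all hypothetical, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class StyleContentCrossFusion(nn.Module):
    """Hypothetical sketch: inject a style embedding into motion latents
    via cross-attention inside a latent-diffusion denoiser block.
    This is an assumed design, not StyleMotif's published architecture."""
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, motion_latents: torch.Tensor, style_emb: torch.Tensor) -> torch.Tensor:
        # motion_latents: (B, T, dim) noisy latent motion tokens (content path)
        # style_emb:      (B, dim) style feature from the aligned style encoder
        style_tokens = style_emb.unsqueeze(1)        # (B, 1, dim) as key/value
        q = self.norm(motion_latents)
        fused, _ = self.attn(q, style_tokens, style_tokens)
        return motion_latents + fused                # residual style injection

# Toy usage with random tensors
fusion = StyleContentCrossFusion()
x = torch.randn(2, 32, 256)   # two 32-frame latent motion sequences
s = torch.randn(2, 256)       # style embeddings (e.g., from a multi-modal encoder)
print(fusion(x, s).shape)     # torch.Size([2, 32, 256])
```

A residual formulation like this would let the denoiser fall back to plain content generation when the style signal is weak; whether StyleMotif uses attention, adaptive normalization, or another fusion operator is not stated in this section.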