We introduce the Cross Human Motion Diffusion Model (CrossDiff), a novel approach for generating high-quality human motion from textual descriptions. Our method integrates 3D and 2D information using a shared transformer network within the training of the diffusion model, unifying motion noise into a single feature space. This enables cross-decoding of features into both 3D and 2D motion representations, regardless of their original dimension. The primary advantage of CrossDiff is its cross-diffusion mechanism, which allows the model to reverse either 2D or 3D noise into clean motion during training. This capability leverages the complementary information in both motion representations, capturing intricate human movement details often missed by models relying solely on 3D information. Consequently, CrossDiff effectively combines the strengths of both representations to generate more realistic motion sequences. In our experiments, our model performs competitively with the state of the art on text-to-motion benchmarks and consistently improves motion generation quality, capturing complex full-body movement intricacies. Additionally, starting from a pretrained model, our approach can exploit in-the-wild 2D motion data without 3D ground truth during training to generate 3D motion, highlighting its potential for broader applications and efficient use of available data resources. Project page: https://wonderno.github.io/CrossDiff-webpage/.
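The cross-decoding idea described above can be illustrated with a minimal sketch: modality-specific projections map noisy 3D or 2D motion into one shared feature space, and two decoding heads recover both representations from it. This is an illustrative toy, not the authors' implementation; the feature dimensions, linear projections, and `denoise` function are all hypothetical stand-ins for the shared transformer and cross-decoders.

```python
import numpy as np

rng = np.random.default_rng(0)

D3, D2, DU = 263, 134, 64  # hypothetical 3D pose dim, 2D pose dim, unified feature dim

# Modality-specific input projections map noisy 3D or 2D motion
# into one shared feature space (stand-in for the shared transformer).
W_in3 = rng.normal(0, 0.02, (D3, DU))
W_in2 = rng.normal(0, 0.02, (D2, DU))

# Cross-decoding heads: features from either modality can be decoded
# back into *both* motion representations.
W_out3 = rng.normal(0, 0.02, (DU, D3))
W_out2 = rng.normal(0, 0.02, (DU, D2))

def denoise(x, modality):
    """Encode noisy motion from one modality, decode to both 3D and 2D."""
    W_in = W_in3 if modality == "3d" else W_in2
    h = np.tanh(x @ W_in)          # unified feature space
    return h @ W_out3, h @ W_out2  # cross-decoded 3D and 2D outputs

# A batch of noisy 2D motion frames still yields a 3D prediction,
# which is what lets 2D-only data supervise 3D generation.
x2d = rng.normal(size=(16, D2))
pred3d, pred2d = denoise(x2d, "2d")
print(pred3d.shape, pred2d.shape)  # (16, 263) (16, 134)
```

Because both modalities share the unified space, a denoising loss can be applied to whichever representation has ground truth, which is how 2D-only data can still inform 3D motion generation.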