Human motion analysis has recently advanced considerably, driven by powerful generative models such as denoising diffusion models and large language models. However, existing approaches mainly focus on generating motions from textual descriptions and overlook the reciprocal task of describing motions. In this paper, we present~\textbf{MoTe}, a unified multi-modal model that handles diverse tasks by simultaneously learning the marginal, conditional, and joint distributions of motion and text. MoTe supports paired text-motion generation, motion captioning, and text-driven motion generation simply by modifying the input context. Specifically, MoTe is composed of three components: the Motion Encoder-Decoder (MED), the Text Encoder-Decoder (TED), and the Motion-Text Diffusion Model (MTDM). MED and TED are trained to extract latent embeddings and subsequently to reconstruct motion sequences and textual descriptions from these embeddings, respectively. MTDM, in turn, performs an iterative denoising process on the input context to handle diverse tasks. Experimental results on benchmark datasets demonstrate that our method achieves superior performance on text-to-motion generation and competitive performance on motion captioning.
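To make the unified formulation concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how a single denoiser over motion and text latents can switch between text-to-motion, motion captioning, and joint generation simply by choosing which part of the input context is held fixed. All class and function names, dimensions, and the toy update rule are illustrative assumptions.

```python
# Minimal sketch: one joint denoiser over (motion, text) latents; the task is
# selected by fixing one latent as the condition and denoising the other.
import torch
import torch.nn as nn


class MTDMSketch(nn.Module):
    """Toy joint denoiser over concatenated motion and text latents."""

    def __init__(self, motion_dim=256, text_dim=256, hidden=512):
        super().__init__()
        self.motion_dim = motion_dim
        self.net = nn.Sequential(
            nn.Linear(motion_dim + text_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, motion_dim + text_dim),
        )

    def forward(self, z_motion, z_text, t):
        # Predict noise for both latents given the diffusion timestep t.
        h = torch.cat([z_motion, z_text, t[:, None]], dim=-1)
        eps = self.net(h)
        return eps[:, : self.motion_dim], eps[:, self.motion_dim :]


def denoise(model, z_motion, z_text, fix_motion=False, fix_text=False, steps=10):
    """Iteratively denoise; a fixed latent acts as the condition for the other."""
    for step in reversed(range(steps)):
        t = torch.full((z_motion.shape[0],), step / steps)
        eps_m, eps_t = model(z_motion, z_text, t)
        if not fix_motion:
            z_motion = z_motion - eps_m / steps  # toy update rule, not DDPM/DDIM
        if not fix_text:
            z_text = z_text - eps_t / steps
    return z_motion, z_text


model = MTDMSketch()
z_m, z_t = torch.randn(2, 256), torch.randn(2, 256)
# Text-to-motion: keep the text latent fixed, denoise the motion latent.
motion_latent, _ = denoise(model, z_m, z_t, fix_text=True)
# Motion captioning: keep the motion latent fixed, denoise the text latent.
_, text_latent = denoise(model, z_m, z_t, fix_motion=True)
# Joint text-motion generation: denoise both latents from noise.
joint_m, joint_t = denoise(model, z_m, z_t)
```

In such a setup, the encoders (MED and TED in the paper's terminology) would supply the latents and map the denoised results back to motion sequences and captions; here random tensors stand in for those latents purely for illustration.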