Text-driven human motion generation, one of the vital tasks in computer-aided content creation, has recently attracted increasing attention. While pioneering research has largely focused on improving numerical performance metrics on given datasets, practical applications reveal a common challenge: existing methods often overfit to specific motion expressions in the training data, which hinders their ability to generalize to novel descriptions such as unseen combinations of motions. This limitation restricts their broader applicability. We argue that this problem primarily arises from the scarcity of available motion-text pairs, given the many-to-many nature of text-driven motion generation. To tackle it, we formulate text-to-motion generation as a Markov decision process and present \textbf{InstructMotion}, which incorporates the trial-and-error paradigm of reinforcement learning for generalizable human motion generation. Leveraging contrastive pre-trained text and motion encoders, we delve into reward design so that InstructMotion operates effectively on both paired data, enhancing text-motion alignment at the global semantic level, and synthetic text-only data, facilitating better generalization to novel prompts without the need for ground-truth motion supervision. Extensive experiments on prevalent benchmarks and on our synthesized unpaired dataset demonstrate that the proposed InstructMotion achieves outstanding performance both quantitatively and qualitatively.
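The reward described above, built on contrastive pre-trained text and motion encoders, can be sketched as a cosine-similarity score between the two embeddings; note this is a minimal illustration under that assumption (the function name and embedding dimension are hypothetical, not from the paper):

```python
import numpy as np

def alignment_reward(text_emb: np.ndarray, motion_emb: np.ndarray) -> float:
    """Scalar reward: cosine similarity between a text embedding and a
    motion embedding, both assumed to come from contrastive pre-trained
    encoders. Higher values indicate better text-motion alignment."""
    t = text_emb / (np.linalg.norm(text_emb) + 1e-8)
    m = motion_emb / (np.linalg.norm(motion_emb) + 1e-8)
    return float(np.dot(t, m))

# Toy usage with random 512-d embeddings (dimension is an assumption):
rng = np.random.default_rng(0)
text_emb = rng.standard_normal(512)
motion_emb = rng.standard_normal(512)
reward = alignment_reward(text_emb, motion_emb)
```

Because such a reward needs only a text embedding and a generated motion's embedding, it can be computed for synthetic text-only prompts as well, which is what allows training without ground-truth motion supervision.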