Recent advances in text-to-motion generation have spurred numerous attempts at convenient and interactive human motion generation. Yet existing methods are largely limited to generating body motions alone, without considering rich two-hand motions, let alone handling varied conditions such as body dynamics or text. To break the data bottleneck, we propose BOTH57M, a novel multi-modal dataset for two-hand motion generation. Our dataset includes accurate motion tracking for both the human body and hands, and provides paired finger-level hand annotations and body descriptions. We further provide a strong baseline method, BOTH2Hands, for a novel task: generating vivid two-hand motions from both implicit body dynamics and explicit text prompts. We first warm up two parallel body-to-hand and text-to-hand diffusion models, then utilize a cross-attention transformer for motion blending. Extensive experiments and cross-validations demonstrate the effectiveness of our approach and dataset for generating convincing two-hand motions under hybrid body-and-text conditions. Our dataset and code will be released to the community for future research.
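To make the described pipeline concrete, below is a minimal sketch of the two-stage design: two parallel conditioned denoisers warmed up independently, followed by cross-attention blending of their motion streams. This is not the authors' implementation; the module names (CrossAttentionBlender), the placeholder GRU backbones standing in for the diffusion denoisers, and all tensor shapes are illustrative assumptions.

```python
# Hedged sketch of the two-stage blending idea, NOT the BOTH2Hands code.
# Placeholder backbones and all names/shapes are assumptions for illustration.
import torch
import torch.nn as nn

class CrossAttentionBlender(nn.Module):
    """Blends two candidate hand-motion streams via cross-attention."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, dim)

    def forward(self, body_stream, text_stream):
        # Query with the body-conditioned stream; attend to the text-conditioned one.
        blended, _ = self.attn(body_stream, text_stream, text_stream)
        return self.out(blended)

# Stage 1 ("warm-up"): two parallel denoisers, trained independently in the paper.
# body2hand: hand motion conditioned on body dynamics; text2hand: on text prompts.
dim, frames, batch = 256, 60, 2
body2hand = nn.GRU(dim, dim, batch_first=True)  # placeholder denoiser backbone
text2hand = nn.GRU(dim, dim, batch_first=True)  # placeholder denoiser backbone
blender = CrossAttentionBlender(dim)

# Stage 2: blend the two conditioned hand-motion streams.
body_stream, _ = body2hand(torch.randn(batch, frames, dim))
text_stream, _ = text2hand(torch.randn(batch, frames, dim))
hands = blender(body_stream, text_stream)
print(hands.shape)  # torch.Size([2, 60, 256]) -> per-frame two-hand features
```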