Retargeting motion between characters with different skeleton structures is a fundamental challenge in computer animation. When source and target characters have vastly different bone arrangements, maintaining the original motion's semantics and quality becomes increasingly difficult. We present PALUM, a novel approach that learns common motion representations across diverse skeleton topologies by partitioning joints into semantic body parts and applying attention mechanisms to capture spatio-temporal relationships. Our method transfers motion to target skeletons by leveraging these skeleton-agnostic representations alongside target-specific structural information. To ensure robust learning and preserve motion fidelity, we introduce a cycle consistency mechanism that maintains semantic coherence throughout the retargeting process. Extensive experiments demonstrate superior performance in handling diverse skeletal structures while maintaining motion realism and semantic fidelity, even when generalizing to previously unseen skeleton-motion combinations. We will make our implementation publicly available to support future research.
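The cycle-consistency mechanism mentioned above can be illustrated with a minimal sketch: motion is retargeted source → target → back to source, and the round trip is penalized for deviating from the original. The `retarget` and `cycle_consistency_loss` functions below are hypothetical stand-ins (uniform scaling, not PALUM's learned, attention-based network) used only to show the shape of the objective.

```python
# Hedged sketch of a cycle-consistency objective for motion retargeting.
# retarget() here is a toy stand-in (uniform bone-length scaling); PALUM's
# actual retargeting is a learned network over body-part representations.
import numpy as np

def retarget(motion, src_height, tgt_height):
    """Toy retargeting: rescale joint positions by overall character height.

    motion: (frames, joints, 3) array of joint positions.
    """
    return motion * (tgt_height / src_height)

def cycle_consistency_loss(motion, src_height, tgt_height):
    """L2 penalty on the source -> target -> source round trip."""
    transferred = retarget(motion, src_height, tgt_height)
    reconstructed = retarget(transferred, tgt_height, src_height)
    return float(np.mean((reconstructed - motion) ** 2))

rng = np.random.default_rng(0)
motion = rng.standard_normal((30, 22, 3))  # 30 frames, 22 joints
loss = cycle_consistency_loss(motion, src_height=1.8, tgt_height=1.2)
```

For this lossless toy mapping the round-trip error is near zero; a learned retargeting network would generally not invert itself exactly, which is why the paper adds this term as an explicit training loss.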