Retargeting human motion to heterogeneous robots is a fundamental challenge in robotics, primarily due to the severe kinematic and dynamic discrepancies across embodiments. Existing solutions typically train embodiment-specific models, an approach that scales poorly and fails to exploit shared motion semantics. To address this, we present AdaMorph, a unified neural retargeting framework in which a single model adapts human motion to diverse robot morphologies. Our approach treats retargeting as a conditional generation task: we map human motion into a morphology-agnostic latent intent space and use a dual-purpose prompting mechanism to condition the generation. Instead of simple input concatenation, we apply Adaptive Layer Normalization (AdaLN) to dynamically modulate the decoder's feature space according to embodiment constraints. Furthermore, we enforce physical plausibility through a curriculum-based training objective that ensures orientation and trajectory consistency via integration. Experiments on 12 distinct humanoid robots demonstrate that AdaMorph effectively unifies control across heterogeneous topologies, exhibiting strong zero-shot generalization to unseen complex motions while preserving the dynamic essence of the source behaviors.
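The AdaLN-style conditioning mentioned above can be sketched as follows. This is a minimal NumPy illustration under assumed dimensions, not the paper's implementation: all names, sizes, and the linear projection are hypothetical. The idea it shows is that the embodiment embedding is projected into per-channel scale and shift parameters that modulate the normalized decoder features, instead of being concatenated to the decoder input.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each feature vector to zero mean and unit variance
    # (no learned affine; the affine part comes from the conditioning below).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def adaln_modulate(x, embodiment_emb, W, b):
    # Project the embodiment embedding into per-channel scale (gamma) and
    # shift (beta), then modulate the normalized features:
    #   out = (1 + gamma) * LN(x) + beta
    # A zero-initialized projection makes this start as a near-identity map.
    params = embodiment_emb @ W + b              # (batch, 2 * d_model)
    gamma, beta = np.split(params, 2, axis=-1)
    return (1.0 + gamma) * layer_norm(x) + beta

# Toy dimensions (hypothetical, for illustration only).
rng = np.random.default_rng(0)
d_model, d_embod, batch = 8, 4, 2
x = rng.standard_normal((batch, d_model))        # decoder features
e = rng.standard_normal((batch, d_embod))        # embodiment prompt embedding
W = rng.standard_normal((d_embod, 2 * d_model)) * 0.02
b = np.zeros(2 * d_model)

y = adaln_modulate(x, e, W, b)
print(y.shape)  # (2, 8)
```

Compared with concatenating the embodiment vector to the input, this modulation lets the same decoder weights reshape their feature statistics per robot, which is what allows one model to serve many morphologies.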