Learning dexterous and agile policies for humanoid and dexterous hand control requires large-scale demonstrations, but collecting robot-specific data is prohibitively expensive. In contrast, abundant human motion data is readily available from motion capture, videos, and virtual reality, and could help address this data scarcity. However, due to the embodiment gap and missing dynamic information such as force and torque, these demonstrations cannot be directly executed on robots. To bridge this gap, we propose Scalable Physics-Informed DExterous Retargeting (SPIDER), a physics-based retargeting framework that transforms and augments kinematic-only human demonstrations into dynamically feasible robot trajectories at scale. Our key insight is that human demonstrations should provide the global task structure and objective, while large-scale physics-based sampling with curriculum-style virtual contact guidance refines trajectories to ensure dynamical feasibility and correct contact sequences. SPIDER scales across 9 diverse humanoid/dexterous hand embodiments and 6 datasets, improving success rates by 18% over standard sampling while running 10× faster than reinforcement learning (RL) baselines, and enables the generation of a 2.4M-frame dynamically feasible robot dataset for policy learning. As a universal physics-based retargeting method, SPIDER works with data of varying quality and generates diverse, high-quality data that enables efficient policy learning with methods such as RL.
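The core idea of physics-based sampling refinement can be illustrated with a minimal MPPI-style sketch: starting from the kinematic reference trajectory, sample perturbed action plans, roll each out through a dynamics model, and average them weighted by how well they track the reference. The function names, the toy first-order dynamics, and all hyperparameters below are illustrative assumptions, not the paper's actual implementation (which uses a full physics simulator and contact guidance).

```python
import numpy as np

def physics_refine(reference, dynamics, num_samples=256, noise_std=0.05,
                   temperature=1.0, iters=10, rng=None):
    """Sampling-based refinement sketch (MPPI-style, hypothetical):
    turn a kinematic reference into a plan that tracks well under `dynamics`."""
    rng = np.random.default_rng(rng)
    horizon, dim = reference.shape
    nominal = reference.copy()  # start from the kinematic reference
    for _ in range(iters):
        noise = rng.normal(0.0, noise_std, size=(num_samples, horizon, dim))
        candidates = nominal[None] + noise
        # Roll out each candidate plan and score by deviation from the reference.
        costs = np.empty(num_samples)
        for i, plan in enumerate(candidates):
            state = plan[0]
            cost = 0.0
            for t in range(1, horizon):
                state = dynamics(state, plan[t])
                cost += np.sum((state - reference[t]) ** 2)
            costs[i] = cost
        # Exponentially weighted average of the samples (MPPI update).
        weights = np.exp(-(costs - costs.min()) / temperature)
        weights /= weights.sum()
        nominal = np.einsum('s,std->td', weights, candidates)
    return nominal

def toy_dynamics(state, action):
    # Toy stand-in for a simulator: the state lags the commanded target.
    return 0.8 * state + 0.2 * action

# Straight-line 3-DoF kinematic reference over 20 steps.
ref = np.linspace(0.0, 1.0, 20)[:, None] * np.ones((1, 3))
refined = physics_refine(ref, toy_dynamics, rng=0)
```

Because the toy dynamics lag their command, the refined plan learns to lead the reference slightly, so its rollout tracks the reference better than executing the reference directly; in SPIDER the same sampling loop additionally shapes contact sequences via the curriculum-style virtual contact guidance.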