Enabling humanoid robots to perform agile and adaptive interactive tasks has long been a core challenge in robotics. Current approaches are bottlenecked either by the scarcity of realistic interaction data or by the need for meticulous, task-specific reward engineering, both of which limit scalability. To narrow this gap, we present HumanX, a full-stack framework that compiles human video into generalizable, real-world interaction skills for humanoids without task-specific rewards. HumanX integrates two co-designed components: XGen, a data generation pipeline that synthesizes diverse and physically plausible robot interaction data from video while supporting scalable data augmentation; and XMimic, a unified imitation learning framework that learns generalizable interaction skills from this data. Evaluated across five distinct domains--basketball, football, badminton, cargo pickup, and reactive fighting--HumanX successfully acquires 10 different skills and transfers them zero-shot to a physical Unitree G1 humanoid. The learned capabilities include complex maneuvers, such as a pump-fake turnaround fadeaway jumpshot performed without any external perception, as well as interactive tasks, such as sustained human-robot passing sequences lasting over 10 consecutive cycles--each learned from a single video demonstration. Our experiments show that HumanX achieves a more than 8-fold higher generalization success rate than prior methods, demonstrating a scalable, task-agnostic pathway for learning versatile, real-world robot interaction skills.