Human demonstrations offer rich environmental diversity and scale naturally, making them an appealing alternative to robot teleoperation. While this paradigm has advanced robot-arm manipulation, its potential for the more challenging, data-hungry problem of humanoid loco-manipulation remains largely unexplored. We present EgoHumanoid, the first framework to co-train a vision-language-action policy on abundant egocentric human demonstrations together with a limited amount of robot data, enabling humanoids to perform loco-manipulation across diverse real-world environments. To bridge the embodiment gap between humans and robots, including discrepancies in physical morphology and viewpoint, we introduce a systematic alignment pipeline spanning hardware design through data processing. We develop a portable system for scalable human data collection and establish practical collection protocols that improve transferability. At the core of our human-to-humanoid alignment pipeline lie two key components: view alignment, which reduces visual domain discrepancies caused by variation in camera height and perspective, and action alignment, which maps human motions into a unified, kinematically feasible action space for humanoid control. Extensive real-world experiments demonstrate that incorporating robot-free egocentric data outperforms robot-only baselines by 51\%, particularly in unseen environments. Our analysis further reveals which behaviors transfer effectively and the potential of scaling human data.