Learning diverse manipulation skills for real-world robots is severely bottlenecked by the reliance on costly and hard-to-scale teleoperated demonstrations. While human videos offer a scalable alternative, effective transfer of manipulation knowledge is fundamentally hindered by the significant morphological gap between human and robotic embodiments. To address this challenge and facilitate human-to-robot skill transfer, we introduce Traj2Action, a novel framework that bridges the embodiment gap by using the 3D trajectory of the operational endpoint as a unified intermediate representation, and then transfers the manipulation knowledge embedded in this trajectory to the robot's actions. Our policy first learns to generate a coarse trajectory, which forms a high-level motion plan by leveraging both human and robot data. This plan then conditions the synthesis of precise, robot-specific actions (e.g., orientation and gripper state) within a co-denoising framework. Our work centers on two core objectives: first, systematic verification of the Traj2Action framework's effectiveness, spanning architectural design, cross-task generalization, and data efficiency; and second, identification of the key laws that govern robot policy learning when human hand demonstration data is integrated. This focus enables us to provide a scalable paradigm for human-to-robot skill transfer across morphological gaps. Extensive real-world experiments on a Franka robot demonstrate that Traj2Action improves performance by up to 27% and 22.25% over the $π_0$ baseline on short- and long-horizon real-world tasks, respectively, and achieves significant gains as human data is scaled up in robot policy learning.
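The two-stage design described above, where a coarse 3D endpoint trajectory is first denoised and then conditions the denoising of full robot actions, can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration only: the linear "denoisers" (`W_traj`, `W_act`), the step sizes, and the 8-dimensional action layout (3D position, 4D orientation quaternion, 1D gripper state) are hypothetical stand-ins for the learned networks and action parameterization of the actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear stand-ins for the learned denoising networks;
# in Traj2Action these would be trained on human and robot data.
W_traj = rng.standard_normal((3, 3)) * 0.1
W_act = rng.standard_normal((3 + 8, 8)) * 0.1  # input: 3D trajectory + 8D action

def denoise_trajectory(noisy_traj, steps=10):
    """Iteratively refine a noisy 3D endpoint trajectory of shape (T, 3)."""
    x = noisy_traj
    for _ in range(steps):
        x = x - 0.1 * (x @ W_traj)  # toy refinement step
    return x

def denoise_actions(noisy_actions, coarse_traj, steps=10):
    """Refine robot actions of shape (T, 8), conditioned on the coarse trajectory."""
    x = noisy_actions
    for _ in range(steps):
        # The coarse trajectory (high-level motion plan) conditions each step.
        cond = np.concatenate([coarse_traj, x], axis=-1)
        x = x - 0.1 * (cond @ W_act)
    return x

T = 16  # horizon length (assumed)
coarse_traj = denoise_trajectory(rng.standard_normal((T, 3)))
actions = denoise_actions(rng.standard_normal((T, 8)), coarse_traj)
```

The key structural point the sketch captures is the asymmetry: the trajectory stage can be trained on both human and robot data (since a 3D endpoint path is embodiment-agnostic), while the action stage, which adds orientation and gripper state, requires robot data but inherits the motion plan through conditioning.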