The ability to learn manipulation skills by watching videos of humans has the potential to unlock a new source of highly scalable data for robot learning. Here, we tackle prehensile manipulation, in which tasks involve grasping an object before performing various post-grasp motions. Human videos offer strong signals for learning the post-grasp motions, but they are less useful for learning the prerequisite grasping behaviors, especially for robots without human-like hands. A promising way forward is to use a modular policy design, leveraging a dedicated grasp generator to produce stable grasps. However, arbitrary stable grasps are often not task-compatible, hindering the robot's ability to perform the desired downstream motion. To address this challenge, we present Perceive-Simulate-Imitate (PSI), a framework for training a modular manipulation policy using human video motion data processed by paired grasp-trajectory filtering in simulation. This simulation step extends the trajectory data with grasp suitability labels, which allows for supervised learning of task-oriented grasping capabilities. We show through real-world experiments that our framework can be used to learn precise manipulation skills efficiently without any robot data, resulting in significantly more robust performance than using a grasp generator naively.
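The core filtering step described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes each candidate grasp is rolled out in simulation against each post-grasp trajectory extracted from human video, and the grasp is labeled task-compatible only if the simulated post-grasp motion succeeds. All names here (`Grasp`, `Trajectory`, `simulate_rollout`) are hypothetical.

```python
# Hypothetical sketch of paired grasp-trajectory filtering in simulation.
# Names and signatures are illustrative assumptions, not the PSI codebase.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass(frozen=True)
class Grasp:
    pose: Tuple[float, ...]  # e.g. a 6-DoF gripper pose (assumed representation)


@dataclass(frozen=True)
class Trajectory:
    waypoints: Tuple[Tuple[float, ...], ...]  # post-grasp motion from human video


def filter_grasp_trajectory_pairs(
    grasps: List[Grasp],
    trajectories: List[Trajectory],
    simulate_rollout: Callable[[Grasp, Trajectory], bool],
) -> List[Tuple[Grasp, Trajectory, bool]]:
    """Label each (grasp, trajectory) pair with a grasp-suitability flag.

    simulate_rollout is assumed to execute the post-grasp trajectory in a
    physics simulator starting from the given grasp and report success.
    """
    labeled = []
    for g in grasps:
        for t in trajectories:
            # A stable grasp is only task-compatible if the simulated
            # post-grasp motion completes without failure.
            success = simulate_rollout(g, t)
            labeled.append((g, t, success))
    return labeled
```

The resulting labeled pairs could then serve as supervision for a task-oriented grasp predictor, while the trajectories themselves supervise the post-grasp motion module.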