Robots can use Visual Imitation Learning (VIL) to learn everyday tasks from video demonstrations. However, translating visual observations into actionable robot policies is challenging due to the high-dimensional nature of video data. This challenge is further exacerbated by the morphological differences between humans and robots, especially when the video demonstrations feature humans performing tasks. To address these problems, we introduce Visual Imitation lEarning with Waypoints (VIEW), an algorithm that significantly improves the sample efficiency of human-to-robot VIL. VIEW achieves this efficiency through a multi-pronged approach: extracting a condensed prior trajectory that captures the demonstrator's intent, employing an agent-agnostic reward function to provide feedback on the robot's actions, and using an exploration algorithm that efficiently samples around waypoints in the extracted trajectory. VIEW also segments the human trajectory into grasp and task phases to further accelerate learning. In comprehensive simulations and real-world experiments, VIEW outperforms current state-of-the-art VIL methods. VIEW enables robots to learn a diverse range of manipulation tasks involving multiple objects from arbitrarily long video demonstrations. Additionally, it can learn standard manipulation tasks such as pushing or moving objects from a single video demonstration in under 30 minutes, with fewer than 20 real-world rollouts. Code and videos are available at: https://collab.me.vt.edu/view/
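To make the waypoint idea concrete, below is a minimal Python sketch of the two steps the abstract names: condensing a dense demonstration trajectory into sparse waypoints, and sampling exploratory targets around those waypoints. The function names, the Ramer-Douglas-Peucker-style simplification, and the Gaussian sampling scale are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def extract_waypoints(trajectory, tol=0.02):
    """Condense a dense (T, 3) trajectory into sparse waypoints using a
    recursive line-simplification heuristic (Ramer-Douglas-Peucker).
    This stands in for VIEW's prior-trajectory extraction, whose exact
    criterion is not specified in the abstract."""
    def rdp(points):
        start, end = points[0], points[-1]
        seg = end - start
        seg_len = np.linalg.norm(seg)
        if len(points) < 3 or seg_len == 0:
            return [points[0], points[-1]]
        # Perpendicular distance of each point to the start-end chord.
        d = np.linalg.norm(np.cross(points - start, seg), axis=1) / seg_len
        i = int(np.argmax(d))
        if d[i] < tol:
            return [points[0], points[-1]]
        left, right = rdp(points[: i + 1]), rdp(points[i:])
        return left[:-1] + right  # drop duplicated split point
    return np.array(rdp(np.asarray(trajectory, dtype=float)))

def sample_around_waypoint(waypoint, scale, rng):
    """Draw one exploratory target near a waypoint; in a full system,
    `scale` would shrink as rollouts succeed, focusing the search."""
    return waypoint + rng.normal(0.0, scale, size=waypoint.shape)

# Toy usage: a curved end-effector path condensed to a few waypoints.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
path = np.stack([t, 0.1 * np.sin(2 * np.pi * t), 0.05 * t], axis=1)
waypoints = extract_waypoints(path, tol=0.01)
candidates = [sample_around_waypoint(w, scale=0.02, rng=rng) for w in waypoints]
print(f"{len(path)} frames -> {len(waypoints)} waypoints")
```

Exploring only in small neighborhoods of the extracted waypoints, rather than over the full state space, is what drives the sample-efficiency claim: the prior trajectory restricts the search to regions already known to be near the demonstrated behavior.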