Robot learning of manipulation skills is hindered by the scarcity of diverse, unbiased datasets. While curated datasets can help, challenges remain in generalizability and real-world transfer. Meanwhile, large-scale "in-the-wild" video datasets have driven progress in computer vision through self-supervised techniques. Translating this to robotics, recent works have explored learning manipulation skills by passively watching abundant videos sourced online. Such video-based learning paradigms show promising results, providing scalable supervision while reducing dataset bias. This survey reviews foundations such as video feature representation learning techniques, object affordance understanding, 3D hand/body modeling, and large-scale robot resources, as well as emerging techniques for acquiring robot manipulation skills from uncontrolled video demonstrations. We discuss how learning solely from observing large-scale human videos can enhance generalization and sample efficiency for robotic manipulation. The survey summarizes video-based learning approaches, analyzes their benefits over standard datasets, surveys metrics and benchmarks, and discusses open challenges and future directions in this nascent domain at the intersection of computer vision, natural language processing, and robot learning.