Reinforcement learning (RL) holds great promise for enabling autonomous acquisition of complex robotic manipulation skills, but realizing this potential in real-world settings has been challenging. We present a human-in-the-loop vision-based RL system that demonstrates impressive performance on a diverse set of dexterous manipulation tasks, including dynamic manipulation, precision assembly, and dual-arm coordination. Our approach integrates demonstrations and human corrections, efficient RL algorithms, and other system-level design choices to learn policies that achieve near-perfect success rates and fast cycle times within just 1 to 2.5 hours of training. We show that our method significantly outperforms imitation learning baselines and prior RL approaches, with an average 2x improvement in success rate and 1.8x faster execution. Through extensive experiments and analysis, we provide insights into the effectiveness of our approach, demonstrating how it learns robust, adaptive policies for both reactive and predictive control strategies. Our results suggest that RL can indeed learn a wide range of complex vision-based manipulation policies directly in the real world within practical training times. We hope this work will inspire a new generation of learned robotic manipulation techniques, benefiting both industrial applications and research advancements. Videos and code are available at our project website https://hil-serl.github.io/.