Post-training algorithms based on deep reinforcement learning can push the limits of robotic models for specific objectives, such as generalizability, accuracy, and robustness. However, Intervention-Requiring Failures (IR Failures), such as a robot spilling water or breaking fragile glass, inevitably occur during real-world exploration, hindering the practical deployment of this paradigm. To tackle this, we introduce Failure-Aware Offline-to-Online Reinforcement Learning (FARL), a new paradigm that minimizes failures during real-world reinforcement learning. We create FailureBench, a benchmark that incorporates common failure scenarios requiring human intervention, and propose an algorithm that integrates a world-model-based safety critic and an offline-trained recovery policy to prevent failures during online exploration. Extensive simulation and real-world experiments demonstrate the effectiveness of FARL in significantly reducing IR Failures while improving performance and generalization during online reinforcement learning post-training. FARL reduces IR Failures by 73.1% while improving performance by 11.3% on average during real-world RL post-training. Videos and code are available at https://failure-aware-rl.github.io.
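To make the safety-gating idea concrete, below is a minimal sketch (not the authors' implementation) of how a world-model-based safety critic could veto risky actions and hand control to an offline-trained recovery policy during online exploration. The names `safety_critic`, `pi_task`, `pi_recover`, and `RISK_THRESHOLD` are hypothetical stand-ins.

```python
# Minimal sketch, assuming a safety critic that maps (observation, action) to an
# estimated IR-Failure risk, a task policy, an offline-trained recovery policy,
# and a fixed risk threshold. All names here are illustrative, not from FARL.

import numpy as np

RISK_THRESHOLD = 0.5  # assumed tolerance for predicted failure risk


def select_action(obs, pi_task, pi_recover, safety_critic):
    """Propose an action with the task policy; fall back to the recovery
    policy when the safety critic predicts a likely intervention-requiring
    failure (e.g., spilling or breaking an object)."""
    action = pi_task(obs)
    risk = safety_critic(obs, action)  # e.g., failure probability estimated from imagined world-model rollouts
    if risk > RISK_THRESHOLD:
        action = pi_recover(obs)  # steer back toward a safe region instead of exploring
    return action


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    rng = np.random.default_rng(0)
    pi_task = lambda obs: rng.uniform(-1.0, 1.0, size=4)
    pi_recover = lambda obs: np.zeros(4)  # conservative "hold still" recovery action
    safety_critic = lambda obs, a: float(np.abs(a).max() > 0.9)  # crude risk proxy
    obs = np.zeros(8)
    print(select_action(obs, pi_task, pi_recover, safety_critic))
```

In this sketch the gating is a hard threshold on predicted risk; the actual method may instead blend the critic into the policy objective or trigger recovery over multi-step imagined rollouts.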