Reinforcement learning (RL) is a powerful framework for optimal decision-making and control but often lacks provable guarantees for safety-critical applications. In this paper, we introduce a novel recovery-based shielding framework that enables safe RL with a provable safety lower bound for unknown, non-linear, continuous dynamical systems. The proposed approach integrates a backup policy (shield) with the RL agent, leveraging Gaussian process (GP) based uncertainty quantification to predict potential violations of safety constraints and dynamically recovering to safe trajectories only when necessary. Experience gathered by the 'shielded' agent is used to construct the GP models, with policy optimization via internal model-based sampling, enabling unrestricted exploration and sample-efficient learning without compromising safety. Empirically, our approach demonstrates strong performance and strict safety compliance on a suite of continuous control environments.
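The shielding rule described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual algorithm: it assumes a 1-D system, a GP dynamics model fitted with scikit-learn on transitions from earlier shielded rollouts, and a pessimistic check that falls back to the backup policy whenever the GP's confidence bound on the next state predicts a constraint violation. All names (`shielded_action`, `SAFE_LIMIT`, `BETA`) and the toy dynamics are invented for exposition.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1-D system: a GP is fitted on (state, action) -> next_state pairs,
# standing in for transitions collected by the shielded agent.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))                           # columns: state, action
y = X[:, 0] + 0.5 * X[:, 1] + 0.01 * rng.standard_normal(40)   # unknown true dynamics
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4).fit(X, y)

SAFE_LIMIT = 0.8   # safety constraint: |next state| must stay below this bound
BETA = 2.0         # confidence multiplier on the GP's predictive std. dev.

def shielded_action(state, rl_action, backup_action):
    """Return the RL action unless the GP confidence bound predicts a violation."""
    mean, std = gp.predict([[state, rl_action]], return_std=True)
    if abs(mean[0]) + BETA * std[0] > SAFE_LIMIT:   # pessimistic safety check
        return backup_action                        # recover via the shield
    return rl_action

# Near the boundary, the aggressive RL action is overridden by the backup.
print(shielded_action(0.7, rl_action=0.9, backup_action=-0.5))
```

The key design point mirrored here is that intervention is *uncertainty-aware*: the shield fires only when the worst-case prediction (mean plus a scaled standard deviation) crosses the constraint, so exploration is unrestricted wherever the model is confidently safe.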