Constrained Reinforcement Learning (CRL) aims to optimize decision-making policies under constraints, making it well suited to safety-critical domains such as autonomous driving, robotics, and power-grid management. However, existing robust CRL approaches focus predominantly on single-step perturbations and temporally independent adversarial models, and lack explicit modeling of robustness against temporally coupled perturbations. To address these challenges, we propose TCRL, a novel temporally coupled adversarial training framework for robust constrained reinforcement learning in worst-case scenarios. First, TCRL introduces a worst-case-aware cost constraint function that estimates safety costs under temporally coupled perturbations without explicitly modeling the adversarial attacker. Second, TCRL establishes a dual-constraint defense mechanism on the reward to counter temporally coupled adversaries while preserving reward unpredictability. Experimental results demonstrate that TCRL consistently outperforms existing methods in robustness against temporally coupled perturbation attacks across a variety of CRL tasks.