Deploying reinforcement learning (RL) systems requires robustness to uncertainty and model misspecification, yet prior robust RL methods typically study only noise introduced independently across time. In practice, however, sources of uncertainty are usually coupled across time. We formally introduce temporally-coupled perturbations, which present a novel challenge for existing robust RL methods. To tackle this challenge, we propose GRAD, a novel game-theoretic approach that treats the temporally-coupled robust RL problem as a partially observable two-player zero-sum game. By finding an approximate equilibrium within this game, GRAD optimizes for general robustness against temporally-coupled perturbations. Experiments on continuous control tasks demonstrate that, compared with prior methods, our approach achieves a higher degree of robustness to various types of attacks across different attack domains, under both temporally-coupled and temporally-decoupled perturbations.
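To make the threat model concrete, here is a minimal sketch of a temporally-coupled perturbation set (the symbols $\epsilon_t$, $\epsilon$, and $\bar{\epsilon}$ are illustrative notation, one plausible instantiation rather than the paper's exact definition): at every step the adversary's perturbation $\epsilon_t$ respects the usual per-step budget, and consecutive perturbations are additionally constrained to stay close,
\[
\|\epsilon_t\| \le \epsilon, \qquad \|\epsilon_{t+1} - \epsilon_t\| \le \bar{\epsilon}, \qquad t = 0, 1, \dots
\]
With a small coupling budget $\bar{\epsilon} \ll \epsilon$, the adversary cannot jump arbitrarily between steps; by the triangle inequality, taking $\bar{\epsilon} \ge 2\epsilon$ makes the coupling constraint vacuous and recovers the standard temporally-independent threat model.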