Bilevel reinforcement learning (RL), which features two intertwined levels of optimization, has attracted growing interest recently. The inherent non-convexity of the lower-level RL problem has, however, been an impediment to developing bilevel optimization methods. By employing the fixed-point equation associated with regularized RL, we characterize the hyper-gradient via fully first-order information, thus circumventing the assumption of lower-level convexity. This, remarkably, distinguishes our development of the hyper-gradient from general AID-based bilevel frameworks, since we exploit the specific structure of RL problems. Moreover, facilitated by access to the fully first-order hyper-gradient, we design both model-based and model-free bilevel reinforcement learning algorithms, both of which enjoy a convergence rate of $O(\epsilon^{-1})$. To broaden applicability, a stochastic version of the model-free algorithm is proposed, along with results on its iteration and sample complexity. In addition, numerical experiments demonstrate that the hyper-gradient indeed integrates exploitation and exploration.
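To make the mechanism alluded to above concrete, the following is a hedged sketch under an entropy-regularization assumption; the symbols $x$ (upper-level variable), $f$ (upper-level objective), $\tau$ (regularization temperature), $J_\tau$ (regularized return), and $Q^*_x$ (regularized optimal action-value function) are our own illustrative notation, not necessarily the paper's. The bilevel problem can be written as
$$\min_{x} \; \Phi(x) := f\big(x, \pi^*(x)\big) \quad \text{s.t.} \quad \pi^*(x) \in \arg\max_{\pi} \; J_\tau(x, \pi).$$
The regularized lower-level optimum satisfies the softmax fixed-point equation
$$\pi^*(a \mid s; x) = \frac{\exp\big(Q^*_x(s,a)/\tau\big)}{\sum_{a'} \exp\big(Q^*_x(s,a')/\tau\big)},$$
and implicitly differentiating this equation expresses $\nabla_x \pi^*(x)$ through $\nabla_x Q^*_x$ alone, so that the hyper-gradient
$$\nabla \Phi(x) = \nabla_x f\big(x, \pi^*(x)\big) + \big(\nabla_x \pi^*(x)\big)^{\top} \nabla_\pi f\big(x, \pi^*(x)\big)$$
involves only first-order quantities, with no lower-level Hessians or their inverses.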