This paper proposes and analyzes two new policy learning methods, regularized policy gradient (RPG) and iterative policy optimization (IPO), for a class of discounted linear-quadratic control (LQC) problems over an infinite time horizon with entropy regularization. Assuming access to exact policy evaluation, both approaches are proven to converge linearly to the optimal policy of the regularized LQC problem. Moreover, the IPO method achieves a super-linear convergence rate once it enters a local region around the optimal policy. Finally, when the optimal policy from a reinforcement learning (RL) problem with a known environment is appropriately transferred as the initial policy for an RL problem with an unknown environment, the IPO method is shown to attain a super-linear convergence rate provided the two environments are sufficiently close. The performance of the proposed algorithms is illustrated with numerical examples.
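To make the setting concrete, the sketch below shows a generic policy-evaluation/policy-improvement loop for a discounted LQR instance with an entropy-regularized Gaussian policy u ~ N(-Kx, Sigma). The matrices A, B, Q, R, the discount gamma, and the temperature tau are placeholder values, and the update is standard discounted-LQR policy iteration with an assumed scaling of the exploration covariance; it is an illustration of the problem class, not the paper's exact RPG or IPO recursions.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative sketch only: policy iteration for a discounted LQR problem with
# an entropy-regularized Gaussian policy u ~ N(-K x, Sigma). Not the paper's
# exact RPG/IPO updates; A, B, Q, R, gamma, tau below are placeholder values.

def evaluate_policy(A, B, Q, R, gamma, K):
    """Value matrix P_K of the linear feedback u = -K x, solving
    P_K = Q + K^T R K + gamma (A - B K)^T P_K (A - B K)."""
    A_cl = A - B @ K
    return solve_discrete_lyapunov(np.sqrt(gamma) * A_cl.T, Q + K.T @ R @ K)

def improve_policy(A, B, R, gamma, P, tau):
    """One policy-improvement (Newton-type) step; the covariance formula
    assumes the entropy term enters with temperature tau (a convention choice)."""
    G = R + gamma * B.T @ P @ B
    K_next = gamma * np.linalg.solve(G, B.T @ P @ A)
    Sigma_next = tau * np.linalg.inv(G)  # assumed scaling of exploration noise
    return K_next, Sigma_next

# Toy usage: iterate from a stabilizing initial gain until the gain settles.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
gamma, tau = 0.9, 0.1
K = np.zeros((1, 2))
for _ in range(20):
    P = evaluate_policy(A, B, Q, R, gamma, K)
    K, Sigma = improve_policy(A, B, R, gamma, P, tau)
```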