We study the global convergence of a Fisher-Rao policy gradient flow for infinite-horizon entropy-regularised Markov decision processes with Polish state and action spaces. The flow is a continuous-time analogue of a policy mirror descent method. We establish the global well-posedness of the gradient flow and demonstrate its exponential convergence to the optimal policy. Moreover, we prove that the flow is stable with respect to gradient-evaluation errors, offering insight into the performance of a natural policy gradient flow with log-linear policy parameterisation. To overcome the challenges stemming from the lack of convexity of the objective function and from the discontinuity introduced by the entropy regulariser, we leverage the performance difference lemma and the duality between gradient and mirror descent flows. Our analysis provides a theoretical foundation for developing various discrete-time policy gradient algorithms.
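To fix ideas, the following sketch records one common set of conventions for the objects named above; reward maximisation, the discount factor $\gamma \in (0,1)$, the regularisation strength $\tau > 0$, and the exact normalisations are assumptions made for illustration rather than the paper's precise statement. Writing $Q_\tau^{\pi}$ for the state-action value associated with the entropy-regularised value
\[
V_\tau^{\pi}(s) \;=\; \mathbb{E}^{\pi}\Bigl[\,\sum_{n \ge 0} \gamma^{n}\bigl(r(s_n,a_n) - \tau \ln \pi(a_n \mid s_n)\bigr) \,\Big|\, s_0 = s\Bigr],
\]
the Fisher-Rao flow takes the replicator form
\[
\partial_t \pi_t(a \mid s) \;=\; \pi_t(a \mid s)\,\Bigl(f_t(s,a) - \int_{\mathcal{A}} f_t(s,a')\,\pi_t(\mathrm{d}a' \mid s)\Bigr), \qquad f_t(s,a) := Q_\tau^{\pi_t}(s,a) - \tau \ln \pi_t(a \mid s),
\]
whose solution admits the exponential-reweighting representation
\[
\pi_t(\mathrm{d}a \mid s) \;\propto\; \pi_0(\mathrm{d}a \mid s)\,\exp\Bigl(\int_0^t f_u(s,a)\,\mathrm{d}u\Bigr).
\]
The last display is the duality invoked above in miniature: the multiplicative dynamics never leave the set of probability measures, and a time discretisation of the exponential reweighting recovers a Kullback-Leibler-regularised policy mirror descent update.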