While there is an extensive body of research analyzing policy gradient methods for discounted cumulative-reward MDPs, prior work on policy gradient methods for average-reward MDPs remains limited, with most existing results restricted to ergodic or unichain settings. In this work, we first establish a policy gradient theorem for average-reward multichain MDPs, based on the invariance of the classification of states into recurrent and transient ones. Building on this foundation, we develop refined analyses and obtain a collection of convergence and sample-complexity results that advance the understanding of this setting. In particular, we show that the proposed $\alpha$-clipped policy mirror ascent algorithm attains an $\varepsilon$-optimal policy with respect to positive policies.
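The abstract only names the $\alpha$-clipped policy mirror ascent algorithm without specifying its update rule, so the following is a minimal, hypothetical sketch of what one tabular iteration could look like. The function name `clipped_pma_step`, the KL-mirror (multiplicative-weights) form of the update, and the convex-combination clipping rule that floors every action probability at $\alpha/|\mathcal{A}|$ are all assumptions made for illustration, not the paper's specification.

```python
import numpy as np

def clipped_pma_step(pi, q, eta, alpha):
    """One hypothetical alpha-clipped policy mirror ascent step (tabular).

    pi:    (S, A) array, current policy; each row sums to 1 and is positive.
    q:     (S, A) array, action-value (or advantage) estimates.
    eta:   step size of the KL-regularized mirror ascent update.
    alpha: clipping level; the result keeps every probability >= alpha / A.
    """
    S, A = pi.shape
    # Mirror ascent under the KL divergence is a multiplicative update:
    # pi'(a|s) proportional to pi(a|s) * exp(eta * q(s, a)).
    logits = np.log(pi) + eta * q
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    new_pi = np.exp(logits)
    new_pi /= new_pi.sum(axis=1, keepdims=True)
    # Assumed "alpha-clip": mix with the uniform policy so the iterate stays
    # strictly positive (each entry >= alpha / A) and rows still sum to 1.
    return (1.0 - alpha) * new_pi + alpha / A

# Toy usage: 2 states, 3 actions, random value estimates.
rng = np.random.default_rng(0)
pi = np.full((2, 3), 1.0 / 3.0)   # start from the uniform (positive) policy
q = rng.normal(size=(2, 3))
pi = clipped_pma_step(pi, q, eta=0.5, alpha=0.05)
print(pi)  # every entry is at least 0.05 / 3; each row sums to 1
```

Mixing with the uniform policy is one simple way to guarantee the strictly positive iterates that the abstract's "with respect to positive policies" guarantee appears to require; the paper's actual clipping mechanism may differ.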