Although average gain optimality is a commonly adopted performance measure in Markov Decision Processes (MDPs), it is often too asymptotic. Further incorporating measures of immediate losses leads to the hierarchy of bias optimalities, all the way up to Blackwell optimality. In this paper, we investigate the problem of identifying policies of such optimality orders. To that end, for each order, we construct a learning algorithm with vanishing probability of error. Furthermore, we characterize the class of MDPs for which identification algorithms can stop in finite time. That class corresponds to the MDPs with a unique Bellman optimal policy and does not depend on the optimality order considered. Lastly, we provide a tractable stopping rule that, when coupled with our learning algorithm, triggers in finite time whenever it is possible to do so.
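To make the hierarchy concrete, the following is a standard formulation of $n$-discount optimality (a sketch following the convention in Puterman, *Markov Decision Processes*, Ch. 10; the notation $v_\lambda^\pi$ for the $\lambda$-discounted value is our assumption, not drawn from this paper):

```latex
% Standard n-discount optimality hierarchy (cf. Puterman, Ch. 10).
% The notation v_\lambda^\pi is ours and may differ from the paper's.
% For n >= -1, a policy \pi is n-discount optimal if, for every policy \pi',
\[
  \liminf_{\lambda \uparrow 1} \, (1-\lambda)^{-n}
  \bigl( v_\lambda^{\pi}(s) - v_\lambda^{\pi'}(s) \bigr) \;\ge\; 0
  \quad \text{for all states } s,
\]
% where v_\lambda^\pi(s) is the \lambda-discounted value of \pi from state s.
% Taking n = -1 recovers gain (average-reward) optimality, n = 0 gives bias
% optimality, and \pi is Blackwell optimal if there exists \lambda_0 < 1
% such that \pi is \lambda-discount optimal for every \lambda in [\lambda_0, 1).
```

Under this convention, each order in the hierarchy refines the previous one, which is the sense in which bias optimalities "incorporate measures of immediate losses" beyond the purely asymptotic average gain.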