Reinforcement learning lacks a principled measure of optimality, causing research to rely on algorithm-to-algorithm or baseline comparisons with no certificate of optimality. Focusing on finite state and action Markov decision processes (MDPs), we develop a simple, computable gap function that provides both upper and lower bounds on the optimality gap. Consequently, convergence of the gap function is a stronger mode of convergence than convergence of the optimality gap, and it is equivalent to a new notion we call distribution-free convergence, in which convergence does not depend on any problem-dependent distribution. We show that basic policy mirror descent exhibits fast distribution-free convergence in both the deterministic and stochastic settings. We leverage distribution-free convergence to uncover two new results. First, deterministic policy mirror descent can solve unregularized MDPs in strongly polynomial time. Second, accuracy estimates can be obtained with no additional samples while running stochastic policy mirror descent and can be used as a termination criterion, which can be verified in the validation step.
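To ground the terminology, the following is a minimal sketch of deterministic (exact-evaluation) policy mirror descent on a small tabular MDP, paired with a computable stopping certificate. The random MDP instance, the step size `eta`, and the Bellman-residual certificate used here are assumptions made for illustration; the paper's gap function may be defined differently.

```python
# Tabular policy mirror descent (PMD) with exact policy evaluation, plus a
# computable certificate of near-optimality. Illustrative sketch only.
import numpy as np

np.random.seed(0)
n_states, n_actions, gamma = 4, 3, 0.9

# Random transition kernel P[s, a, s'] and rewards r[s, a] (assumed instance).
P = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))
r = np.random.rand(n_states, n_actions)

def evaluate(pi):
    """Exact policy evaluation: solve (I - gamma * P_pi) V = r_pi, then form Q."""
    P_pi = np.einsum("sa,sap->sp", pi, P)
    r_pi = np.einsum("sa,sa->s", pi, r)
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
    Q = r + gamma * P @ V  # Q[s, a] = r[s, a] + gamma * sum_s' P[s, a, s'] * V[s']
    return V, Q

pi = np.full((n_states, n_actions), 1.0 / n_actions)  # uniform initial policy
eta = 1.0                                             # PMD step size (assumed)

for k in range(200):
    V, Q = evaluate(pi)
    # Computable certificate: the largest one-step (Bellman) improvement over all
    # states. A standard stand-in: it upper-bounds (1 - gamma) times the sup-norm
    # optimality gap, so driving it to zero certifies near-optimality.
    certificate = np.max(Q.max(axis=1) - V)
    if certificate < 1e-8:
        break
    # KL (negative-entropy) mirror-descent step: multiplicative reweighting of
    # actions by exp(eta * Q), renormalized per state.
    pi = pi * np.exp(eta * Q)
    pi /= pi.sum(axis=1, keepdims=True)

print(f"iterations: {k}, certificate: {certificate:.2e}")
```

Because the certificate is computed from quantities the algorithm already maintains (the current Q-values and value function), it can double as a termination criterion at no extra cost, which is the role the accuracy estimates play in the stochastic setting described above.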