We consider a Markov decision process parametrized by an unknown parameter and study the asymptotic behavior of a sampling-based algorithm called Thompson sampling. The standard definition of regret is not always suitable for evaluating a policy, especially when the underlying chain structure is general. We show that the standard (expected) regret can grow (super-)linearly and fails to capture the notion of learning in realistic settings with non-trivial state evolution. By decomposing the standard (expected) regret, we develop a new metric, called the expected residual regret, which forgets the immutable consequences of past actions and instead measures regret against the optimal reward moving forward from the current period. We show that the expected residual regret of the Thompson sampling algorithm is upper bounded by a term that converges exponentially fast to 0. We present conditions under which the posterior sampling error of Thompson sampling converges to 0 almost surely. We then introduce a probabilistic version of the expected residual regret and give conditions under which it, too, converges to 0 almost surely. Thus, we provide a viable concept of learning for sampling algorithms that should prove useful in settings broader than those considered previously.
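To make the setup concrete, the following is a minimal sketch of Thompson (posterior) sampling over a finite family of candidate MDP models. It is illustrative only and is not the paper's construction: the discounted finite MDP, the candidate parameter set, and the helper names random_mdp and greedy_policy are all assumptions introduced here for exposition.

```python
import numpy as np

# A minimal, hypothetical sketch of Thompson sampling in a parametrized MDP.
# The unknown parameter indexes a finite family of candidate models
# (transition kernel P, reward matrix R); the posterior lives on that family.

rng = np.random.default_rng(0)
n_states, n_actions, horizon = 3, 2, 200

def random_mdp(seed):
    """A hypothetical candidate model: transition kernel P and rewards R."""
    r = np.random.default_rng(seed)
    P = r.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # (S, A, S)
    R = r.uniform(size=(n_states, n_actions))                       # (S, A)
    return P, R

candidates = [random_mdp(seed) for seed in range(5)]  # finite parameter set
true_P, true_R = candidates[3]                        # nature's unknown parameter
log_post = np.zeros(len(candidates))                  # uniform prior (log scale)

def greedy_policy(P, R, gamma=0.95, n_iter=100):
    """Value iteration for the sampled model; returns a greedy policy."""
    V = np.zeros(n_states)
    for _ in range(n_iter):
        Q = R + gamma * (P @ V)   # Bellman backup, shape (S, A)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

s = 0
for t in range(horizon):
    # 1) Sample a candidate model from the current posterior.
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    k = rng.choice(len(candidates), p=post)
    # 2) Act optimally with respect to the sampled model.
    a = greedy_policy(*candidates[k])[s]
    # 3) Observe the true transition and update the posterior by Bayes' rule.
    s_next = rng.choice(n_states, p=true_P[s, a])
    for i, (P, _) in enumerate(candidates):
        log_post[i] += np.log(P[s, a, s_next] + 1e-12)
    s = s_next
```

In runs of this toy, the posterior typically concentrates on the data-generating candidate, illustrating (in a much simpler setting) the posterior-sampling-error convergence the abstract refers to; the expected residual regret would then score the policy only on its forward-looking loss from the current state, not on the sunk cost of early exploration.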