In this work, we study a natural nonparametric estimator of the transition probability matrices of a finite controlled Markov chain. We consider an offline setting with a fixed dataset collected using a so-called logging policy. We develop sample complexity bounds for the estimator and establish conditions for minimaxity. Our statistical bounds depend on the logging policy through its mixing properties. We show that achieving a particular statistical risk bound involves a subtle and interesting trade-off between the strength of the mixing properties and the number of samples. We demonstrate the validity of our results through various examples, such as ergodic Markov chains, weakly ergodic inhomogeneous Markov chains, and controlled Markov chains with non-stationary Markov, episodic, and greedy controls. Lastly, we use these sample complexity bounds to establish concomitant ones for offline evaluation of stationary Markov control policies.
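To make the object of study concrete, the following is a minimal sketch of one natural count-based (plug-in) estimator of the transition matrices from offline data: estimate P(s' | s, a) by the fraction of observed transitions out of (s, a) that landed in s'. The function name, the triple-list data format, and the uniform fallback for unvisited state-action pairs are illustrative assumptions, not necessarily the paper's exact conventions.

```python
import numpy as np

def empirical_transitions(transitions, n_states, n_actions):
    """Count-based estimate of P(s' | s, a) from offline (s, a, s') triples.

    Returns an array P_hat of shape (n_actions, n_states, n_states) with
    P_hat[a, s, s2] = N(s, a, s2) / N(s, a), where N counts observed triples.
    """
    counts = np.zeros((n_actions, n_states, n_states))
    for s, a, s_next in transitions:
        counts[a, s, s_next] += 1.0
    visits = counts.sum(axis=2, keepdims=True)  # N(s, a) per (action, state)
    # For never-visited (s, a) pairs, fall back to a uniform row (one
    # possible convention; an assumption made here for illustration).
    P_hat = np.divide(
        counts,
        visits,
        out=np.full_like(counts, 1.0 / n_states),
        where=visits > 0,
    )
    return P_hat

# Toy example: 2 states, 1 action, 4 logged transitions.
data = [(0, 0, 1), (1, 0, 0), (0, 0, 1), (1, 0, 1)]
P = empirical_transitions(data, n_states=2, n_actions=1)
# P[0, 0] is [0.0, 1.0]; P[0, 1] is [0.5, 0.5]
```

Note that the quality of this estimator hinges on how often the logging policy visits each state-action pair, which is where the mixing properties discussed in the abstract enter the sample complexity bounds.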