The Adversarial Markov Decision Process (AMDP) is a learning framework for unknown and time-varying tasks in decision-making applications such as robotics and recommendation systems. A major limitation of the AMDP formalism, however, is its pessimistic regret analysis: although the cost function may change from one episode to the next, in many settings its evolution is not adversarial. To address this, we introduce and study a new variant of AMDP that aims to minimize regret while utilizing a set of cost predictors. For this setting, we develop a new policy search method that achieves a sublinear optimistic regret with high probability, that is, a regret bound that gracefully degrades with the estimation power of the cost predictors. Establishing such optimistic regret bounds is nontrivial given that (i) as we demonstrate, the existing importance-weighted cost estimators cannot yield optimistic bounds, and (ii) the feedback model of AMDP differs from (and is more realistic than) the one considered in existing optimistic online learning works. Our result, in particular, hinges on developing a novel optimistically biased cost estimator that leverages the cost predictors and enables a high-probability regret analysis without imposing restrictive assumptions. We further discuss practical extensions of the proposed scheme and demonstrate its efficacy numerically.
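To make the notion of an optimistically biased cost estimator concrete, a plausible form (a hedged sketch in our own illustrative notation; the predictor $m_t$, visitation probability $q_t$, and bias parameter $\gamma$ are assumptions, not necessarily the paper's construction) combines a cost predictor with an implicit-exploration-style importance-weighted correction of its residual:
$$
\widehat{c}_t(s,a) \;=\; m_t(s,a) \;+\; \frac{\bigl(c_t(s,a) - m_t(s,a)\bigr)\,\mathbb{1}\{(s,a)\ \text{visited in episode } t\}}{q_t(s,a) + \gamma},
$$
where $m_t(s,a)$ is the predicted cost, $q_t(s,a)$ is the probability that the learner's policy visits the state-action pair $(s,a)$ in episode $t$, and $\gamma > 0$ injects a deliberate optimistic bias that keeps the estimator's deviations controllable in a high-probability analysis. Under this kind of construction, an accurate predictor makes the correction term small, which is the mechanism by which the regret bound scales with the predictors' estimation error rather than with the worst-case cost range.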