We study model-based reinforcement learning (RL) for episodic Markov decision processes (MDPs) whose transition probabilities are parametrized by an unknown transition core together with features of the state and action. Despite much recent progress in analyzing algorithms in the linear MDP setting, the understanding of more general transition models remains very limited. In this paper, we establish a provably efficient RL algorithm for MDPs whose state transitions are given by a multinomial logistic model. To balance the exploration-exploitation trade-off, we propose an upper confidence bound-based algorithm. We show that our proposed algorithm achieves a $\tilde{O}(d \sqrt{H^3 T})$ regret bound, where $d$ is the dimension of the transition core, $H$ is the horizon, and $T$ is the total number of steps. To the best of our knowledge, this is the first provably efficient model-based RL algorithm with multinomial logistic function approximation. We also evaluate our proposed algorithm comprehensively in numerical experiments and show that it consistently outperforms existing methods, hence achieving both provable efficiency and superior practical performance.
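For concreteness, a multinomial logistic (MNL) transition model of the kind studied here can be written as follows; the feature map $\varphi$ and the reachable-state set $\mathcal{S}_{s,a}$ below are illustrative notation rather than the paper's own definitions:
\[
P(s' \mid s, a) \;=\; \frac{\exp\!\left(\varphi(s, a, s')^\top \theta^*\right)}{\sum_{\tilde{s} \in \mathcal{S}_{s,a}} \exp\!\left(\varphi(s, a, \tilde{s})^\top \theta^*\right)},
\]
where $\theta^* \in \mathbb{R}^d$ is the unknown transition core and $\varphi(s, a, s') \in \mathbb{R}^d$ is a known feature vector of the state-action-next-state triple. The UCB-based algorithm then acts optimistically with respect to a confidence set around an estimate of $\theta^*$.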