To date, distributional reinforcement learning (distributional RL) methods have focused exclusively on the discounted setting, in which an agent aims to optimize a discounted sum of rewards over time. In this work, we extend distributional RL to the average-reward setting, in which an agent aims to optimize the reward received per time step. In particular, we use a quantile-based approach to develop the first set of algorithms that can successfully learn and/or optimize both the long-run per-step reward distribution and the differential return distribution of an average-reward MDP. We derive provably convergent tabular algorithms for both prediction and control, as well as a broader family of algorithms with appealing scaling properties. Empirically, we find that these algorithms yield competitive, and sometimes superior, performance relative to their non-distributional counterparts, while also capturing rich information about the long-run per-step reward and differential return distributions.
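To make the quantile-based approach concrete, below is a minimal sketch of tabular quantile temporal-difference prediction adapted to the average-reward setting. This is an illustrative reconstruction, not the paper's exact algorithm: the environment interface (`env.reset()`, `env.step(a)` returning a state and reward), the `policy` callable, and all step-size values are assumptions introduced here for the example.

```python
# Minimal sketch: tabular quantile-based prediction for the average-reward
# setting, under a fixed policy. The env/policy interface and hyperparameters
# are illustrative assumptions, not the paper's specification.
import numpy as np

def differential_qtd_prediction(env, policy, num_states, num_quantiles=32,
                                alpha=0.01, beta=0.01, num_steps=100_000,
                                seed=0):
    """Estimate quantiles of the differential return distribution from each
    state, alongside a scalar estimate of the long-run average reward."""
    rng = np.random.default_rng(seed)
    # theta[s, i] approximates the tau_i-quantile of the differential return
    # from state s.
    theta = np.zeros((num_states, num_quantiles))
    taus = (np.arange(num_quantiles) + 0.5) / num_quantiles  # quantile midpoints
    r_bar = 0.0  # estimate of the reward received per time step

    s = env.reset()
    for _ in range(num_steps):
        a = policy(s, rng)
        s_next, r = env.step(a)

        # Sample a target quantile index, as in quantile TD learning; the
        # differential TD error subtracts the reward-rate estimate instead
        # of discounting the successor value.
        j = rng.integers(num_quantiles)
        deltas = r - r_bar + theta[s_next, j] - theta[s]  # one error per quantile
        # Quantile-regression update: each theta_i moves up with weight tau_i
        # when the error is nonnegative, down with weight (1 - tau_i) otherwise.
        theta[s] += alpha * (taus - (deltas < 0.0))

        # Track the average reward with a simple exponential moving average.
        r_bar += beta * (r - r_bar)
        s = s_next

    return theta, r_bar
```

In this sketch the reward-rate estimate `r_bar` plays the role that discounting plays in discounted quantile TD: subtracting it from each observed reward centers the updates so that the differential-return quantile targets remain well defined under the fixed policy.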