Q-learning is a widely used reinforcement learning (RL) algorithm, but its convergence can be slow, especially when the discount factor is close to one. Successive Over-Relaxation (SOR) Q-learning, which introduces a relaxation factor to speed up convergence, addresses this issue but has two major limitations: in the tabular setting, the relaxation parameter depends on the transition probabilities, so the method is not entirely model-free, and it suffers from overestimation bias. To overcome these limitations, we propose a sample-based, model-free double SOR Q-learning algorithm. Both theoretically and empirically, this algorithm is shown to be less biased than SOR Q-learning. Further, in the tabular setting, a convergence analysis is provided under boundedness assumptions on the iterates. The proposed algorithm is then extended to large-scale problems using deep RL. Finally, the tabular version of the proposed algorithm is evaluated on roulette and grid-world environments, while the deep RL version is tested on a maximization-bias example and OpenAI Gym environments.
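To make the kind of update concrete, the following is a minimal tabular sketch combining the double-estimator trick (two tables `QA`, `QB`; one selects the greedy action, the other evaluates it) with an SOR-style target that mixes the successor bootstrap and a same-state bootstrap through a relaxation factor `w`. The exact update rule, the toy transition sampling, and all parameter values here are illustrative assumptions, not the paper's precise algorithm.

```python
import numpy as np

def double_sor_q_update(QA, QB, s, a, r, s_next, alpha, gamma, w):
    """One sample-based double SOR-style Q-learning step (sketch).

    The relaxation factor w > 1 over-weights the successor bootstrap,
    which can speed up convergence when gamma is close to one; the
    double estimator evaluates QA's greedy actions with QB to reduce
    overestimation bias.
    """
    a_star = np.argmax(QA[s_next])   # greedy action at next state (selected by QA)
    b_star = np.argmax(QA[s])        # greedy action at current state (selected by QA)
    target = w * (r + gamma * QB[s_next, a_star]) + (1.0 - w) * QB[s, b_star]
    QA[s, a] += alpha * (target - QA[s, a])

# Toy usage on randomly sampled transitions (not a real environment):
rng = np.random.default_rng(0)
nS, nA = 4, 2
QA = np.zeros((nS, nA))
QB = np.zeros((nS, nA))
for _ in range(500):
    s, a = rng.integers(nS), rng.integers(nA)
    r, s_next = rng.normal(), rng.integers(nS)
    if rng.random() < 0.5:           # as in double Q-learning, update one table at random
        double_sor_q_update(QA, QB, s, a, r, s_next, alpha=0.1, gamma=0.9, w=1.05)
    else:
        double_sor_q_update(QB, QA, s, a, r, s_next, alpha=0.1, gamma=0.9, w=1.05)
```

Setting `w = 1` recovers the ordinary double Q-learning target, which is one way to sanity-check an implementation.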