In recent work it is shown that Q-learning with linear function approximation is stable, in the sense of bounded parameter estimates, under the $(\varepsilon,\kappa)$-tamed Gibbs policy; $\kappa$ is the inverse temperature, and $\varepsilon>0$ is introduced for additional exploration. Under these assumptions it also follows that there is a solution to the projected Bellman equation (PBE). Left open are uniqueness of the solution and criteria for convergence outside of the standard tabular or linear MDP settings. The present work extends these results to other variants of Q-learning and clarifies prior work: a one-dimensional example shows that under an oblivious training policy the PBE may have no solution, or multiple solutions, and in each case the algorithm is not stable under oblivious training. The main contribution is to show that far more structure is required for convergence. An example is presented for which the basis is ideal, in the sense that the true Q-function lies in the span of the basis. Nevertheless, the PBE has two solutions under the greedy policy, and hence also under the $(\varepsilon,\kappa)$-tamed Gibbs policy for all sufficiently small $\varepsilon>0$ and $\kappa\ge 1$.
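To fix ideas, the following sketch recalls the standard form of the objects named above; the symbols here are not defined in the abstract, so they are assumed conventions (cost minimization, basis $\psi$, discount factor $\gamma$, parameterization $Q^\theta=\theta^\top\psi$). With $\varpi$ denoting the stationary distribution of state-action pairs $(X_n,U_n)$ under the training policy, the PBE is the root-finding problem
\[
\mathbb{E}_{\varpi}\Big[\psi(X_n,U_n)\Big(c(X_n,U_n) + \gamma \min_{u'} Q^{\theta^*}(X_{n+1},u') - Q^{\theta^*}(X_n,U_n)\Big)\Big] = 0,
\]
whose solutions $\theta^*$ are the candidate limit points of the algorithm. A Gibbs (softmax) policy selects actions with probability $\phi^\theta(u\mid x)\propto \exp\big(-\kappa\, Q^\theta(x,u)\big)$; the $(\varepsilon,\kappa)$-tamed variant additionally bounds the exponent (for instance, by normalizing $Q^\theta$ by $1+\|\theta\|$) and mixes in $\varepsilon$-uniform exploration, though the precise taming is as defined in the cited work.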