Several recent works have focused on non-asymptotic convergence analyses of actor-critic (AC) algorithms. Recently, a two-timescale critic-actor algorithm was presented for the discounted cost setting in the look-up table case, where the timescales of the actor and the critic are reversed and only asymptotic convergence was shown. In our work, we present the first two-timescale critic-actor algorithm with function approximation in the long-run average reward setting, and provide the first finite-time non-asymptotic as well as asymptotic convergence analysis for such a scheme. We obtain optimal learning rates and prove that our algorithm achieves a sample complexity of $\tilde{\mathcal{O}}(\epsilon^{-(2+\delta)})$, with $\delta > 0$ arbitrarily close to zero, for the mean squared error of the critic to be upper bounded by $\epsilon$, which improves upon the sample complexity obtained for two-timescale AC in a similar setting. A notable feature of our analysis is that, in addition to the finite-time bounds, we establish the almost sure asymptotic convergence of the (slower) critic recursion to the attractor of an associated differential inclusion, with actor parameters corresponding to local maxima of a perturbed average reward objective. We also report the results of numerical experiments on three benchmark settings and observe that our critic-actor algorithm performs the best among all algorithms compared.
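To make the timescale reversal concrete, the following is a minimal sketch of coupled critic-actor recursions in the long-run average reward setting with a linear critic; the feature map $\phi$, the step-size sequences $\alpha_t, \beta_t, \gamma_t$, and the policy parameterization $\pi_\theta$ are illustrative assumptions and not the paper's exact construction.

% Hedged sketch (assumed form): two-timescale critic-actor updates with
% linear critic features \phi; step sizes \alpha_t (critic), \beta_t (actor),
% \gamma_t (average-reward estimate) are illustrative.
\begin{align*}
  \delta_t &= r_t - \eta_t + \phi(s_{t+1})^\top v_t - \phi(s_t)^\top v_t
    && \text{(TD error)}\\
  \eta_{t+1} &= \eta_t + \gamma_t\,(r_t - \eta_t)
    && \text{(average reward estimate)}\\
  v_{t+1} &= v_t + \alpha_t\,\delta_t\,\phi(s_t)
    && \text{(critic, slower timescale)}\\
  \theta_{t+1} &= \theta_t + \beta_t\,\delta_t\,\nabla_\theta \log \pi_{\theta_t}(a_t \mid s_t)
    && \text{(actor, faster timescale)}
\end{align*}

In the critic-actor ordering, the step sizes satisfy $\alpha_t/\beta_t \to 0$, so the actor moves on the faster timescale while the critic is the slower recursion, which is the reversal of the usual AC ordering referred to in the abstract.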