The paper concerns the $d$-dimensional stochastic approximation recursion, $$ \theta_{n+1}= \theta_n + \alpha_{n + 1} f(\theta_n, \Phi_{n+1}) $$ where $\{ \Phi_n \}$ is a stochastic process on a general state space, satisfying a conditional Markov property that allows for parameter-dependent noise. The main results are established under additional conditions on the mean flow and a version of the Donsker-Varadhan Lyapunov drift condition known as (DV3): (i) An appropriate Lyapunov function is constructed that implies convergence of the estimates in $L_4$. (ii) A functional central limit theorem (CLT) is established, as well as the usual one-dimensional CLT for the normalized error. Moment bounds combined with the CLT imply convergence of the normalized covariance $\textsf{E} [ z_n z_n^T ]$ to the asymptotic covariance appearing in the CLT, where $z_n := (\theta_n-\theta^*)/\sqrt{\alpha_n}$. (iii) The CLT holds for the normalized version $z^{\text{PR}}_n := \sqrt{n} [\theta^{\text{PR}}_n -\theta^*]$ of the averaged parameters $\theta^{\text{PR}}_n := n^{-1} \sum_{k=1}^n\theta_k$, subject to standard assumptions on the step-size. Moreover, the covariance in the CLT coincides with the minimal covariance of Polyak and Ruppert. (iv) An example is given in which $f$ and $\bar{f}$ are linear in $\theta$, and $\Phi$ is a geometrically ergodic Markov chain that does not satisfy (DV3). While the algorithm is convergent, the second moment of $\theta_n$ is unbounded and in fact diverges. {\bf This arXiv version 3 represents a major extension of the results in prior versions.} The main results now allow for parameter-dependent noise, as is often the case in applications to reinforcement learning.
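The recursion and the Polyak-Ruppert average can be illustrated numerically. The following is a minimal sketch, not taken from the paper: it uses a hypothetical scalar linear $f(\theta,\phi) = A(\phi)\theta + b(\phi)$ with a two-state Markov chain $\Phi$, chosen so that the mean flow is $\dot{\vartheta} = -\vartheta + 1$ with root $\theta^* = 1$; all coefficients are invented for illustration.

```python
# Sketch of theta_{n+1} = theta_n + alpha_{n+1} f(theta_n, Phi_{n+1})
# with a hypothetical linear f and a two-state Markov chain Phi.
import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov chain on {0, 1}; symmetric transitions give the
# uniform invariant distribution.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])

# State-dependent coefficients (illustrative values only): under the
# uniform invariant law they average to A_bar = -1, b_bar = 1, so the
# mean flow is d/dt theta = -theta + 1 with root theta* = 1.
A = {0: -1.2, 1: -0.8}
b = {0: 0.5, 1: 1.5}

def f(theta, phi):
    return A[phi] * theta + b[phi]

N = 50_000
theta = 0.0
phi = 0
running_sum = 0.0
for n in range(1, N + 1):
    phi = rng.choice(2, p=P[phi])               # Phi_{n+1}
    theta += (1.0 / n) * f(theta, phi)          # step-size alpha_n = 1/n
    running_sum += theta

theta_pr = running_sum / N                      # Polyak-Ruppert average
print(theta, theta_pr)                          # both near theta* = 1
```

The averaged iterate `theta_pr` is the quantity whose normalized error $z^{\text{PR}}_n$ obeys the CLT with minimal covariance in result (iii).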