We propose a decentralized online learning algorithm for distributed random inverse problems over network graphs with online measurements, which unifies distributed parameter estimation in Hilbert spaces and the least mean square problem in reproducing kernel Hilbert spaces (RKHS-LMS). We transform the convergence of the algorithm into the asymptotic stability of a class of inhomogeneous random difference equations in Hilbert spaces with $L_{2}$-bounded martingale difference terms, and develop the $L_2$-asymptotic stability theory in Hilbert spaces. We show that if the network graph is connected and the sequence of forward operators satisfies the infinite-dimensional spatio-temporal persistence of excitation condition, then the estimates of all nodes are strongly consistent in both the mean square and almost sure senses. Moreover, we propose a decentralized online learning algorithm in RKHS based on non-stationary online data streams, and prove that the algorithm is strongly consistent in both the mean square and almost sure senses if the operators induced by the random input data satisfy the infinite-dimensional spatio-temporal persistence of excitation condition.
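The reduction of convergence to the stability of an inhomogeneous random difference equation can be sketched in a generic consensus-plus-innovation form. The notation below ($e_{i,k}$ for node $i$'s estimation error, $w_{ij}$ for the graph weights over neighborhood $\mathcal{N}_i$, $H_{i,k}$ for the random forward operators, $a_k$ for step sizes, and $m_{i,k+1}$ for the martingale difference term) is an illustrative assumption, not the paper's own symbols:

```latex
% Hedged sketch: a consensus + innovation error recursion in a Hilbert space
% with an L2-bounded martingale difference term; symbols are illustrative only.
e_{i,k+1} \;=\; \sum_{j \in \mathcal{N}_i} w_{ij}\, e_{j,k}
  \;-\; a_k\, H_{i,k}\, e_{i,k}
  \;+\; a_k\, m_{i,k+1},
\qquad
\mathbb{E}\!\left[\, m_{i,k+1} \,\middle|\, \mathcal{F}_k \right] = 0,
\quad
\sup_{k}\, \mathbb{E}\,\bigl\| m_{i,k+1} \bigr\|^{2} < \infty .
```

Under this form, mean square and almost sure strong consistency amount to the $L_2$-asymptotic stability of the recursion when the operators $H_{i,k}$ satisfy the spatio-temporal persistence of excitation condition.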