We establish a framework for distributed random inverse problems over network graphs with online measurements, and propose a decentralized online learning algorithm. This framework unifies distributed parameter estimation in Hilbert spaces and the least mean squares problem in reproducing kernel Hilbert spaces (RKHS-LMS). We transform the convergence of the algorithm into the asymptotic stability of a class of inhomogeneous random difference equations in Hilbert spaces with L2-bounded martingale difference terms, and develop the L2-asymptotic stability theory in Hilbert spaces. We show that if the network graph is connected and the sequence of forward operators satisfies the infinite-dimensional spatio-temporal persistence of excitation condition, then the estimates of all nodes are mean square and almost surely strongly consistent. Moreover, we propose a decentralized online learning algorithm in RKHS based on non-stationary and non-independent online data streams, and prove that the algorithm is mean square and almost surely strongly consistent provided the operators induced by the random input data satisfy the infinite-dimensional spatio-temporal persistence of excitation condition.
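To make the flavor of such a decentralized scheme concrete, the following is a minimal finite-dimensional sketch (not the paper's algorithm, whose analysis is infinite-dimensional): each node combines a consensus term over its neighbors with an innovation term driven by its own noisy linear measurement, using a decreasing gain. The network, weight matrix `W`, gain sequence, and noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, -2.0])      # unknown parameter (assumed, for illustration)
n, d, T = 3, 2, 5000               # nodes, dimension, time steps

# Doubly stochastic consensus weights for a fully connected 3-node graph
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

x = np.zeros((n, d))               # each row: one node's current estimate
for k in range(1, T + 1):
    a = 1.0 / k                                     # decreasing gain
    H = rng.normal(size=(n, d))                     # random regressors (forward operators)
    y = H @ theta + 0.1 * rng.normal(size=n)        # noisy scalar measurements
    consensus = W @ x - x                           # neighborhood averaging term
    residual = y - np.einsum('ij,ij->i', H, x)      # per-node prediction error
    innovation = H * residual[:, None]              # gradient-like correction
    x = x + a * (consensus + innovation)

err = np.linalg.norm(x - theta, axis=1)             # per-node estimation error
```

Here the random regressors are persistently exciting in aggregate, so every node's estimate approaches `theta`; in the paper's setting the analogous role is played by the infinite-dimensional spatio-temporal persistence of excitation condition.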