This work introduces a novel and efficient Bayesian federated learning algorithm, namely, the Federated Averaging stochastic Hamiltonian Monte Carlo (FA-HMC), for parameter estimation and uncertainty quantification. We establish rigorous convergence guarantees for FA-HMC on non-i.i.d. distributed data sets under strong convexity and Hessian smoothness assumptions. Our analysis investigates how the dimension of the parameter space, the noise on gradients and momentum, and the frequency of communication (between the central node and local nodes) affect the convergence rate and communication cost of FA-HMC. Beyond that, we establish the tightness of our analysis by showing that the convergence rate cannot be improved even for the continuous-time FA-HMC process. Moreover, extensive empirical studies demonstrate that FA-HMC outperforms the existing Federated Averaging-Langevin Monte Carlo (FA-LD) algorithm.
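The abstract describes the overall FA-HMC scheme: each local node runs stochastic-gradient Hamiltonian Monte Carlo on its own data, and a central node periodically averages the local chains. The toy sketch below illustrates that loop only in spirit; the quadratic local potentials, the injected gradient noise, and all constants (step size, leapfrog steps, synchronization period) are hypothetical choices for illustration, not the paper's actual algorithm or tuning, and no Metropolis correction is applied.

```python
import numpy as np

rng = np.random.default_rng(0)

K, d = 4, 2                       # number of clients, parameter dimension
mus = rng.normal(size=(K, d))     # per-client means, mimicking non-i.i.d. local data
eta, L, E, T = 0.05, 5, 10, 2000  # step size, leapfrog steps, sync period, rounds

def grad_U(x, k):
    # Noisy gradient of a strongly convex local potential
    # U_k(x) = 0.5 * ||x - mu_k||^2 (stochastic-gradient assumption).
    return (x - mus[k]) + 0.01 * rng.normal(size=d)

x = np.zeros((K, d))              # one chain per client
samples = []
for t in range(T):
    p = rng.normal(size=(K, d))   # refresh momentum at the start of each round
    for k in range(K):
        # Leapfrog integration of the local Hamiltonian dynamics.
        for _ in range(L):
            p[k] -= 0.5 * eta * grad_U(x[k], k)
            x[k] += eta * p[k]
            p[k] -= 0.5 * eta * grad_U(x[k], k)
    if (t + 1) % E == 0:
        # Communication round: the central node averages the local chains
        # and broadcasts the result back to every client.
        x[:] = x.mean(axis=0)
    samples.append(x.mean(axis=0).copy())

# Posterior-mean estimate from the second half of the trajectory (burn-in discarded).
est_mean = np.mean(samples[T // 2:], axis=0)
```

For these quadratic potentials the averaged chains concentrate near the mean of the `mu_k`, so `est_mean` should land close to `mus.mean(axis=0)`; the sync period `E` is the knob trading communication cost against how tightly the local chains track each other, which is the trade-off the paper's analysis quantifies.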