Personalized Bayesian federated learning (PBFL) handles non-i.i.d. client data and quantifies uncertainty by combining personalization with Bayesian inference. However, existing PBFL methods face two limitations: restrictive parametric assumptions in client-side posterior inference and naive parameter averaging during server-side aggregation. To overcome these issues, we propose FedWBA, a novel PBFL method that enhances both local inference and global aggregation. At the client level, we use particle-based variational inference for a nonparametric posterior representation. At the server level, we introduce particle-based Wasserstein barycenter aggregation, which combines client posteriors in a geometrically meaningful way rather than averaging parameters directly. Theoretically, we provide local and global convergence guarantees for FedWBA. Locally, we prove a per-iteration lower bound on the decrease in KL divergence, ensuring convergence of the variational inference. Globally, we show that the Wasserstein barycenter converges to the true parameter as the per-client data size increases. Empirically, experiments show that FedWBA outperforms baselines in prediction accuracy, uncertainty calibration, and convergence rate, with ablation studies confirming its robustness.
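To make the server-side step concrete, below is a minimal sketch, not the authors' implementation, of particle-based Wasserstein barycenter aggregation. It assumes each client posterior is represented by the same number of equal-weight particles (e.g., produced by a particle-based variational method such as SVGD) and uses the classical fixed-point barycentric-projection update; the function name and the use of `scipy.optimize.linear_sum_assignment` for the optimal matching are illustrative choices, not taken from the paper.

```python
# Illustrative sketch of particle-based Wasserstein barycenter aggregation.
# Assumptions (not from the paper): every client posterior is an equal-weight
# empirical measure over the same number n of particles, and the barycenter is
# computed by the classical fixed-point / barycentric-projection iteration.
import numpy as np
from scipy.optimize import linear_sum_assignment


def wasserstein_barycenter(client_particles, n_iters=50, tol=1e-8):
    """W2 barycenter of uniform empirical measures with a shared particle count.

    client_particles: list of (n, d) arrays, one per client.
    Returns an (n, d) array of barycenter particles.
    """
    bary = np.mean(client_particles, axis=0)  # crude but valid initialization
    for _ in range(n_iters):
        matched = []
        for particles in client_particles:
            # Squared Euclidean cost between barycenter and client particles.
            cost = np.sum((bary[:, None, :] - particles[None, :, :]) ** 2, axis=-1)
            # Uniform weights + equal particle counts => the optimal transport
            # plan is a permutation, recovered exactly by linear assignment.
            _, cols = linear_sum_assignment(cost)
            matched.append(particles[cols])
        new_bary = np.mean(matched, axis=0)  # barycentric projection update
        if np.linalg.norm(new_bary - bary) < tol:
            break
        bary = new_bary
    return bary


# Toy usage: three clients whose "posterior" particles are shifted Gaussians.
rng = np.random.default_rng(0)
clients = [rng.normal(loc=m, scale=1.0, size=(64, 2)) for m in (-2.0, 0.0, 2.0)]
print(wasserstein_barycenter(clients).mean(axis=0))  # close to the overall mean
```

Because all measures are uniform with the same number of atoms, each optimal transport plan is a permutation, so the assignment-based matching is exact and the barycenter update reduces to averaging the matched particles across clients.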