We scrutinize the resilience of the partial-sharing online federated learning (PSO-Fed) algorithm against model-poisoning attacks. PSO-Fed reduces the communication load by enabling clients to exchange only a fraction of their model estimates with the server at each update round. Partial sharing of model estimates also enhances the robustness of the algorithm against model-poisoning attacks. To gain better insights into this phenomenon, we analyze the performance of the PSO-Fed algorithm in the presence of Byzantine clients, malicious actors who may subtly tamper with their local models by adding noise before sharing them with the server. Through our analysis, we demonstrate that PSO-Fed maintains convergence in both mean and mean-square senses, even under the strain of model-poisoning attacks. We further derive the theoretical mean square error (MSE) of PSO-Fed, linking it to various parameters such as stepsize, attack probability, number of Byzantine clients, client participation rate, partial-sharing ratio, and noise variance. We also show that there is a non-trivial optimal stepsize for PSO-Fed when faced with model-poisoning attacks. The results of our extensive numerical experiments affirm our theoretical assertions and highlight the superior ability of PSO-Fed to counteract Byzantine attacks, outperforming other related leading algorithms.
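The mechanism described above can be illustrated with a minimal simulation sketch. This is not the paper's exact PSO-Fed algorithm (in particular, the server broadcast is simplified to a full-model download); all parameter values, the linear-regression task, and the uniform random entry selection are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 10             # model dimension (assumed)
K = 20             # number of clients (assumed)
byzantine = set(range(3))  # assume the first 3 clients are Byzantine
p_attack = 0.5     # attack probability
share_frac = 0.3   # partial-sharing ratio: fraction of entries uploaded
mu = 0.05          # stepsize
noise_std = 1.0    # poisoning-noise standard deviation

w_true = rng.standard_normal(d)   # ground-truth model
w_global = np.zeros(d)            # server estimate
w_local = np.zeros((K, d))        # per-client estimates

for t in range(1000):
    sums = np.zeros(d)
    counts = np.zeros(d)
    for k in range(K):
        # each client observes one noisy linear measurement
        x = rng.standard_normal(d)
        y = x @ w_true + 0.01 * rng.standard_normal()
        # local LMS (stochastic-gradient) update
        err = y - x @ w_local[k]
        w_local[k] = w_local[k] + mu * err * x
        # partial sharing: upload only a random subset of entries
        idx = rng.choice(d, size=int(share_frac * d), replace=False)
        shared = w_local[k][idx].copy()
        if k in byzantine and rng.random() < p_attack:
            # model poisoning: Byzantine client adds noise before sharing
            shared += noise_std * rng.standard_normal(idx.size)
        sums[idx] += shared
        counts[idx] += 1
    # server averages whatever entries were shared this round
    mask = counts > 0
    w_global[mask] = sums[mask] / counts[mask]
    # simplified broadcast: clients pull the full global model back
    for k in range(K):
        w_local[k] = w_global.copy()

mse = np.mean((w_global - w_true) ** 2)
```

Even with this crude averaging, the steady-state MSE stays well below the initial error despite the injected poisoning noise, since only a fraction of each Byzantine client's (perturbed) entries reaches the server in any round. The theoretical MSE derived in the paper makes this dependence on stepsize, attack probability, and partial-sharing ratio precise.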