Adversarial attacks pose significant challenges in many machine learning applications, particularly in the setting of distributed training and federated learning, where malicious agents seek to corrupt the training process with the goal of compromising the performance and reliability of the final models. In this paper, we address the problem of robust federated learning in the presence of such attacks by formulating the training task as a bi-level optimization problem. We conduct a theoretical analysis of the resilience of consensus-based bi-level optimization (CB$^2$O), an interacting multi-particle metaheuristic optimization method, in adversarial settings. Specifically, we provide a global convergence analysis of CB$^2$O in mean-field law in the presence of malicious agents, demonstrating the robustness of CB$^2$O against a diverse range of attacks. In doing so, we offer insights into how specific hyperparameter choices help mitigate adversarial effects. On the practical side, we extend CB$^2$O to the clustered federated learning setting by proposing FedCB$^2$O, a novel interacting multi-particle system, and design a practical algorithm that addresses the demands of real-world applications. Extensive experiments demonstrate the robustness of the FedCB$^2$O algorithm against label-flipping attacks in decentralized clustered federated learning scenarios, showcasing its effectiveness in practical contexts.