Federated learning (FL) has emerged as a promising paradigm for decentralized model training, enabling multiple clients to collaboratively learn a shared model without exchanging their local data. However, the decentralized nature of FL also introduces vulnerabilities, as malicious clients can compromise or manipulate the training process. In this work, we introduce dictator clients, a novel, well-defined, and analytically tractable class of malicious participants capable of entirely erasing the contributions of all other clients from the server model while preserving their own. We propose concrete attack strategies that empower such clients and systematically analyze their effects on the learning process. Furthermore, we explore complex scenarios involving multiple dictator clients, including cases where they collaborate, act independently, or form an alliance only to ultimately betray one another. For each of these settings, we provide a theoretical analysis of the impact on the global model's convergence. Our algorithms and theoretical findings on these multi-dictator scenarios are further supported by empirical evaluations on both computer vision and natural language processing benchmarks.
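The abstract does not spell out the attack mechanics. As a minimal sketch only (not the paper's actual strategy), assume a FedAvg-style server that averages client updates. A dictator who can scale its update by the number of clients `n` can boost its desired model shift; if it can additionally anticipate or observe the honest updates, it can subtract them and erase their contribution exactly. The names `fedavg`, `w_dict`, and the update construction below are illustrative assumptions:

```python
import numpy as np

def fedavg(global_w, deltas):
    # Server applies the average of the submitted client updates (FedAvg-style).
    return global_w + np.mean(deltas, axis=0)

n = 5
rng = np.random.default_rng(0)
global_w = rng.normal(size=4)
honest = [rng.normal(scale=0.1, size=4) for _ in range(n - 1)]
w_dict = rng.normal(size=4)  # model the dictator wants the server to adopt

# Boosted update: scale the desired shift by n, so averaging leaves w_dict
# plus only the (small) mean of the honest contributions.
boosted = n * (w_dict - global_w)
w_boosted = fedavg(global_w, honest + [boosted])

# If the dictator can anticipate the honest updates, it subtracts them too,
# erasing every other client's contribution exactly.
exact = n * (w_dict - global_w) - np.sum(honest, axis=0)
w_erased = fedavg(global_w, honest + [exact])
```

Under the exact variant, `w_erased` equals `w_dict` to numerical precision, i.e. the honest clients' contributions are fully removed from the aggregated model.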