Federated learning aggregates local updates from clients into a global model, a process that is susceptible to poisoning attacks. Most previous defenses relied on vectors derived from projections of local updates into Euclidean space; however, such vectors fail to accurately capture the functionality and structure of local models, resulting in inconsistent performance. Here we present a new paradigm for defending against poisoning attacks in federated learning that uses functional mappings of local models based on their intermediate outputs. Experiments show that our mechanism is robust across a broad range of computing conditions and advanced attack scenarios, enabling safer collaboration among data-sensitive participants via federated learning.
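To make the contrast with parameter-space defenses concrete, the following is a minimal sketch of the general idea, not the paper's actual method: each client model is characterized by its outputs on a small shared probe set (a stand-in for the "functional mapping based on intermediate outputs"), and clients whose functional signature is a robust outlier are excluded before aggregation. The names `functional_signature` and `filter_updates`, the probe set, and the MAD-based outlier test are all illustrative assumptions.

```python
import numpy as np

def functional_signature(model_fn, probe_inputs):
    # Evaluate the model on shared probe inputs and flatten the
    # (intermediate) outputs into one vector -- a crude stand-in
    # for a functional mapping of the model.
    return np.concatenate(
        [np.atleast_1d(model_fn(x)).ravel() for x in probe_inputs]
    )

def filter_updates(client_fns, probe_inputs, z_thresh=2.0):
    # Compare clients in function space rather than parameter space:
    # distance of each signature to the coordinate-wise median signature.
    sigs = np.stack(
        [functional_signature(f, probe_inputs) for f in client_fns]
    )
    dists = np.linalg.norm(sigs - np.median(sigs, axis=0), axis=1)
    # Robust z-score via the median absolute deviation (MAD);
    # 1e-12 guards against division by zero.
    med = np.median(dists)
    mad = np.median(np.abs(dists - med)) + 1e-12
    z = 0.6745 * (dists - med) / mad
    # Keep only clients whose functional behavior is not an outlier.
    return [i for i in range(len(client_fns)) if z[i] <= z_thresh]
```

In this sketch a poisoned model that behaves very differently on the probe set is flagged even if its parameter vector looks unremarkable, which is the intuition behind moving the defense from parameter space to function space.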