Federated learning (FL) enables multiple clients to collaboratively train a global model without sharing their local data. Recent studies have highlighted the vulnerability of FL to Byzantine attacks, in which malicious clients send poisoned updates to degrade model performance. Notably, many attacks target specific aggregation rules, whereas many defense mechanisms are designed for dedicated threat models. This paper studies the resilience of an attack-agnostic FL scenario, in which the server has no prior knowledge of either the attackers' strategies or the number of malicious clients involved. We first introduce a hybrid defense against state-of-the-art attacks. Our goal is to identify a general-purpose aggregation rule that performs well on average while also avoiding worst-case vulnerabilities. By adaptively selecting from the available defenses, we demonstrate that the server remains robust even when confronted with a substantial proportion of poisoned updates. To better understand this resilience, we then assess the attackers' capability using client heterogeneity as a proxy. We also emphasize that existing FL defenses should not be regarded as secure, as demonstrated by the newly proposed Trapsetter attack. The proposed attack outperforms other state-of-the-art attacks, further reducing model test accuracy by 8-10%. Our findings highlight the ongoing need for Byzantine-resilient aggregation algorithms in FL.