Federated learning, a paradigm designed to protect data privacy, is vulnerable to backdoor attacks due to its distributed nature. Current research typically designs attacks around a single attacker with a single backdoor, overlooking more realistic and complex threats in federated learning. We propose a more practical threat model: the distributed multi-target backdoor, in which multiple attackers control different clients, embed distinct triggers, and target different classes, collaboratively implanting backdoors into the global model via central aggregation. Empirical validation shows that existing methods struggle to maintain the effectiveness of multiple backdoors in the global model. Our key insight is that similar backdoor triggers cause parameter conflicts, and that injecting new backdoors disrupts gradient directions, significantly weakening the performance of some backdoors. To address this, we propose the Distributed Multi-Target Backdoor Attack (DMBA), which preserves the effectiveness and persistence of backdoors from different malicious clients. To avoid parameter conflicts, we design a multi-channel dispersed frequency trigger strategy that maximizes the differences between triggers. To mitigate gradient interference, we introduce backdoor replay into local training to neutralize conflicting gradients. Extensive experiments show that, 30 rounds after the attack, the Attack Success Rates of three different backdoors from different clients remain above 93%. The code will be made publicly available after the review period.
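The multi-channel dispersed frequency trigger idea can be illustrated with a minimal sketch: each attacker perturbs a disjoint frequency band of a different colour channel, so the triggers overlap as little as possible. This is a hypothetical illustration (function names and band choices are assumptions, not the paper's implementation):

```python
import numpy as np

def embed_frequency_trigger(img, channel, freq_band, amplitude=5.0):
    """Embed a trigger by perturbing one frequency band of one colour
    channel (hypothetical sketch; not the actual DMBA trigger design)."""
    spectrum = np.fft.fft2(img[..., channel])
    r0, r1 = freq_band
    spectrum[r0:r1, r0:r1] += amplitude  # perturb the chosen band
    out = img.copy()
    out[..., channel] = np.real(np.fft.ifft2(spectrum))
    return out

# Three attackers use disjoint (channel, band) pairs to keep their
# triggers dispersed across channels and frequency bands:
rng = np.random.default_rng(0)
x = rng.random((32, 32, 3))
x1 = embed_frequency_trigger(x, channel=0, freq_band=(1, 4))    # low band, R
x2 = embed_frequency_trigger(x, channel=1, freq_band=(8, 12))   # mid band, G
x3 = embed_frequency_trigger(x, channel=2, freq_band=(14, 16))  # high band, B
```

Because each trigger occupies its own channel and frequency band, the poisoned updates are less likely to compete for the same model parameters during aggregation.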