Federated Learning (FL) is a distributed machine learning approach that preserves data privacy by training on decentralized data sources. Like centralized machine learning, FL is susceptible to backdoor attacks. Most backdoor attacks in FL assume a predefined target class and require control over a large number of clients or knowledge of benign clients' information. Moreover, they are not imperceptible: the clear artifacts they leave on the poisoned data are easily detected by human inspection. To overcome these challenges, we propose Venomancer, an effective backdoor attack that is imperceptible and supports on-demand target selection. Specifically, imperceptibility is achieved with a visual loss function that makes the poisoned data visually indistinguishable from the original data, and the target-on-demand property allows the attacker to choose arbitrary target classes via conditional adversarial training. Experiments show that the method is robust against state-of-the-art defenses such as Norm Clipping, Weak DP, Krum, and Multi-Krum. The source code is available at https://anonymous.4open.science/r/Venomancer-3426.
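To give a rough intuition for the imperceptibility property described above, the sketch below illustrates one common way such a constraint can be enforced: bounding a generated perturbation so the poisoned image stays pixel-wise close to the clean one, with a mean-squared error standing in for a visual loss term. This is a minimal illustration under our own assumptions, not the paper's actual implementation; the function names (`poison`, `visual_loss`), the perturbation budget, and the use of MSE are all hypothetical.

```python
import numpy as np

def poison(image, perturbation, eps=8 / 255):
    """Apply a (hypothetical) generator's perturbation under a small budget.

    Clipping the perturbation to [-eps, eps] keeps each pixel of the
    poisoned image close to the original, which is one simple way to
    approximate visual imperceptibility.
    """
    delta = np.clip(perturbation, -eps, eps)
    return np.clip(image + delta, 0.0, 1.0)

def visual_loss(clean, poisoned):
    # Mean-squared error between clean and poisoned images; a stand-in
    # for the visual loss an attacker would minimize during training.
    return float(np.mean((clean - poisoned) ** 2))

rng = np.random.default_rng(0)
img = rng.random((3, 32, 32))            # a clean image in [0, 1]
pert = rng.normal(scale=0.1, size=img.shape)
poisoned = poison(img, pert)
```

In an attack with a target-on-demand property, the perturbation would additionally be produced by a generator conditioned on the desired target class, so one trained generator can serve arbitrary targets.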