Hierarchical federated learning (HFL) is a promising paradigm for distributed deep learning model training, but it raises critical security concerns arising from adversarial attacks. This research investigates and assesses the security of HFL with a novel methodology, focusing on its resilience against adversarial attacks at both inference time and training time. Through a series of extensive experiments across diverse datasets and attack scenarios, we find that HFL's hierarchical structure makes it robust against untargeted training-time attacks. However, targeted attacks, particularly backdoor attacks, exploit this very architecture, especially when malicious clients are positioned in the overlapping coverage areas of edge servers. HFL's resilience is thus dual in nature: on one hand, its hierarchical aggregation enables it to recover from attacks and makes it well suited to adversarial training, reinforcing its resistance to inference-time attacks; on the other hand, the same architecture can be exploited by targeted attacks. These insights underscore the need for balanced security strategies in HFL systems that leverage the architecture's inherent strengths while effectively mitigating its vulnerabilities.
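To make the two-level aggregation concrete, the following is a minimal sketch of hierarchical FedAvg, assuming flattened model weights as NumPy arrays; the function names (weighted_average, edge_aggregate, cloud_aggregate) and the toy sizes are illustrative, not from the paper. It also shows how a client in the overlapping coverage of two edge servers contributes to both edge aggregates, the position the paper identifies as favorable for backdoor attackers.

```python
# Minimal sketch of two-level (hierarchical) FedAvg aggregation.
# Assumption: model updates are flattened numpy parameter vectors;
# all names here are hypothetical illustrations, not the paper's code.
import numpy as np

def weighted_average(updates, sizes):
    """FedAvg: average parameter vectors weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

def edge_aggregate(client_updates, client_sizes):
    """Edge server aggregates updates from clients in its coverage area."""
    return weighted_average(client_updates, client_sizes)

def cloud_aggregate(edge_models, edge_sizes):
    """Cloud server aggregates per-edge models into the global model."""
    return weighted_average(edge_models, edge_sizes)

# Toy example: two edge servers; client c2 sits in both coverage areas,
# so its update enters both edge aggregates -- the overlap the paper
# flags as a point where targeted (backdoor) updates gain influence.
rng = np.random.default_rng(0)
c0, c1, c2 = (rng.normal(size=4) for _ in range(3))
edge_a = edge_aggregate([c0, c2], [100, 80])   # edge A covers c0, c2
edge_b = edge_aggregate([c1, c2], [120, 80])   # edge B covers c1, c2
global_model = cloud_aggregate([edge_a, edge_b], [180, 200])
print(global_model)
```

Because c2's update is weighted into both edge models before the cloud round, a malicious c2 effectively doubles its footprint in the global average, which is one intuition behind the overlap vulnerability described above.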