In federated learning (FL), profiling and verifying each client is inherently difficult, which introduces a significant security vulnerability: malicious clients, commonly referred to as Byzantine clients, can degrade the accuracy of the global model by submitting poisoned updates during training. To mitigate this, the aggregation process at the parameter server must be robust against such adversarial behaviour. Most existing defences approach the Byzantine problem from an outlier-detection perspective, treating malicious updates as statistical anomalies while ignoring the internal structure of the trained neural network (NN). Motivated by this observation, this work highlights the potential of leveraging side information tied to the NN architecture to design stronger, more targeted attacks. In particular, inspired by insights from sparse NNs, we introduce a hybrid sparse Byzantine attack consisting of two coordinated components: (i) a sparse attack component that selectively manipulates the most sensitive parameters of the NN, aiming to cause maximum disruption with minimal visibility; and (ii) a slow-accumulating attack component that silently poisons parameters over multiple rounds to evade detection. Together, these components yield a strong yet imperceptible attack strategy that can bypass common defences. We evaluate the proposed attack through extensive simulations and demonstrate its effectiveness against eight state-of-the-art defence mechanisms.
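To make the two components concrete, the following is a minimal illustrative sketch (not the paper's exact method): it uses gradient magnitude as a stand-in proxy for parameter sensitivity, flips the sign of the `k` most sensitive coordinates (the sparse component), and adds a small bias that accumulates across rounds (the slow-accumulating component). All names and parameters here are hypothetical.

```python
import numpy as np

def hybrid_sparse_attack(honest_update, state, k=10, eps=0.01):
    """Illustrative sketch of a hybrid sparse Byzantine attack.

    honest_update : benign model update the attacker would have sent
    state         : per-client poison accumulator carried across rounds
    k             : number of high-sensitivity parameters to target
    eps           : per-round magnitude of the slow-accumulating poison
    """
    poisoned = honest_update.copy()

    # (i) Sparse component: treat |update| as a proxy for parameter
    # sensitivity (an assumption of this sketch) and flip the sign of
    # the k most sensitive coordinates only.
    idx = np.argsort(np.abs(honest_update))[-k:]
    poisoned[idx] = -poisoned[idx]

    # (ii) Slow-accumulating component: add a small bias every round so
    # the per-round deviation stays small and hard to flag as an outlier,
    # while the cumulative drift grows linearly over training.
    state = state + eps
    poisoned += state

    return poisoned, state
```

A defence that screens each round's updates for statistical anomalies sees only a small, mostly-benign-looking perturbation, which is why the two components are applied jointly rather than in isolation.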