Federated Learning (FL) paradigms enable large numbers of clients to collaboratively train machine learning models on private data. However, due to their multi-party nature, traditional FL schemes are vulnerable to Byzantine attacks that attempt to degrade model performance by injecting malicious backdoors. A wide variety of defense methods have been proposed to protect frameworks from such attacks. This paper provides an exhaustive and up-to-date taxonomy of existing methods and frameworks, before zooming in and conducting an in-depth analysis of the strengths and weaknesses of the Robustness of Federated Learning (RoFL) protocol. From there, we propose two novel Sybil-based attacks that exploit vulnerabilities in RoFL. Finally, we conclude with comprehensive proposals for future testing, describe the implementation of the proposed attacks in detail, and offer directions for improving the RoFL protocol as well as Byzantine-robust frameworks as a whole.