Federated learning (FL) enables multiple clients to collaboratively train machine learning models without revealing their private training data. Conventional FL follows a server-assisted architecture (server-assisted FL), in which a central server coordinates the training process. However, server-assisted FL suffers from poor scalability, due to a communication bottleneck at the server, as well as trust-dependency issues. To address these challenges, the decentralized federated learning (DFL) architecture has been proposed, allowing clients to train models collaboratively in a serverless, peer-to-peer manner. However, owing to its fully decentralized nature, DFL is highly vulnerable to poisoning attacks, in which malicious clients can manipulate the system by sending carefully crafted local models to their neighboring clients. To date, only a limited number of Byzantine-robust DFL methods have been proposed, and most are either communication-inefficient or remain vulnerable to advanced poisoning attacks. In this paper, we propose a new algorithm called BALANCE (Byzantine-robust averaging through local similarity in decentralization) to defend against poisoning attacks in DFL. In BALANCE, each client uses its own local model as a similarity reference to determine whether a received model is malicious or benign. We establish theoretical convergence guarantees for BALANCE under poisoning attacks in both strongly convex and non-convex settings. Moreover, the convergence rate of BALANCE under poisoning attacks matches that of state-of-the-art counterparts in Byzantine-free settings. Extensive experiments further demonstrate that BALANCE outperforms existing DFL methods and effectively defends against poisoning attacks.
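The filtering idea described above — each client accepting a neighbor's model only if it is sufficiently similar to the client's own local model — can be sketched as follows. This is a minimal illustration, not the paper's exact rule: the relative L2 distance metric, the fixed `threshold` parameter, and the simple averaging step are all assumptions made for the sketch.

```python
import numpy as np

def balance_filter(local_model, received_models, threshold=0.5):
    """Illustrative similarity-based filter in the spirit of BALANCE.

    A client compares each received model against its own local model
    and keeps only those within a relative-distance threshold; the
    kept models are then averaged together with the local model.
    The metric and threshold here are illustrative assumptions.
    """
    accepted = []
    for w in received_models:
        # Relative L2 distance to the client's own local model
        dist = np.linalg.norm(w - local_model) / (np.linalg.norm(local_model) + 1e-12)
        if dist <= threshold:
            accepted.append(w)
    # Aggregate the local model with the accepted neighbor models;
    # if every neighbor was rejected, fall back to the local model.
    if accepted:
        return np.mean([local_model] + accepted, axis=0)
    return local_model

# Usage: a benign neighbor (close to the local model) is kept,
# while a poisoned model (far away) is filtered out.
local = np.ones(4)
benign = 1.1 * np.ones(4)
poisoned = 10.0 * np.ones(4)
aggregated = balance_filter(local, [benign, poisoned], threshold=0.5)
```

Because the reference is the client's own model rather than a server-side statistic, this check runs independently at every client, which fits the serverless, peer-to-peer setting of DFL.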