Interest in federated learning (FL) has surged in recent research due to its unique ability to train a global model from privacy-sensitive data held locally on each client. This paper focuses on client-side model heterogeneity, a pervasive challenge that complicates the practical deployment of FL. In scenarios where clients differ in memory capacity, processing power, and network bandwidth, a phenomenon referred to as system heterogeneity, there is a pressing need to tailor a distinct model to each client. In response, we present FedP3, an effective and adaptable federated framework for Federated Personalized and Privacy-friendly network Pruning, designed for model-heterogeneity scenarios. Our proposed methodology can incorporate and adapt well-established techniques as specific instances. We offer a theoretical interpretation of FedP3 and its locally differentially private variant, DP-FedP3, and theoretically validate their efficiency.