Federated learning (FL) is a privacy-preserving machine learning paradigm designed to collaboratively learn a global model without data leakage. Specifically, in a typical FL system, the central server acts solely as a coordinator that iteratively aggregates the local models trained by each client, which can introduce a single-point transmission bottleneck and security threats. To mitigate this issue, decentralized federated learning (DFL) has been proposed, in which all participating clients communicate peer-to-peer without a central server. Nonetheless, like FL, DFL still suffers from training degradation due to the non-independent and identically distributed (non-IID) nature of client data, and incorporating personalization layers into DFL may be one of the most effective solutions to alleviate the side effects caused by non-IID data. Therefore, in this paper, we propose a novel unit representation aided personalized decentralized federated learning framework, named UA-PDFL, to address the non-IID challenge in DFL. By adaptively adjusting the level of personalization layers under the guidance of the unit representation, UA-PDFL is able to handle varying degrees of data skew. On top of this scheme, client-wise dropout and layer-wise personalization are proposed to further enhance the learning performance of DFL. Extensive experiments empirically demonstrate the effectiveness of the proposed method.
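To make the DFL setting described above concrete, the following is a minimal, illustrative sketch of one peer-to-peer aggregation round in which designated personalization layers are kept local while the remaining layers are averaged with neighbors. This is not the UA-PDFL algorithm itself; the layer names, the ring topology, and the split between shared and personalized layers are assumptions made purely for illustration.

```python
# Illustrative sketch only: uniform peer-to-peer averaging of shared layers,
# with personalization layers excluded from mixing. Not the paper's method.
import copy
from typing import Dict, List

import torch


def dfl_round(
    client_models: List[Dict[str, torch.Tensor]],
    neighbors: Dict[int, List[int]],
    personal_keys: List[str],
) -> List[Dict[str, torch.Tensor]]:
    """Average shared parameters with peers; keep personalized layers local."""
    new_models = []
    for i, model in enumerate(client_models):
        peers = [i] + neighbors[i]            # include the client's own model
        updated = copy.deepcopy(model)
        for key in model:
            if any(key.startswith(p) for p in personal_keys):
                continue                      # personalization layer: no mixing
            updated[key] = torch.stack(
                [client_models[j][key] for j in peers]
            ).mean(dim=0)                     # uniform averaging over peers
        new_models.append(updated)
    return new_models


if __name__ == "__main__":
    # Three clients on a ring topology, each with one shared and one personal layer
    # (hypothetical parameter names chosen for this example).
    models = [
        {"shared.weight": torch.full((2, 2), float(i)),
         "personal.weight": torch.full((2, 2), float(i))}
        for i in range(3)
    ]
    ring = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    out = dfl_round(models, ring, personal_keys=["personal"])
    print(out[0]["shared.weight"])    # averaged across peers
    print(out[0]["personal.weight"])  # unchanged, kept personal
```

In a personalized DFL scheme such as the one proposed here, which layers fall into `personal_keys` would not be fixed in advance but adjusted adaptively; the sketch only shows the aggregation pattern that such a scheme builds on.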