Federated Learning (FL) enables multiple nodes to collaboratively train a model without sharing raw data. However, FL systems are usually deployed in heterogeneous scenarios, where nodes differ in both data distributions and participation frequencies, which degrades FL performance. To tackle this issue, this paper proposes PMFL, a performance-enhanced model-contrastive federated learning framework that exploits historical training information. Specifically, on the node side, we introduce a novel model-contrastive term into the node's optimization objective that incorporates historical local models as stable contrastive points, thereby improving the consistency of model updates under heterogeneous data distributions. On the server side, we use the cumulative participation count of each node to adaptively adjust its aggregation weight, thereby correcting the bias in the global objective caused by unequal node participation frequencies. Furthermore, the updated global model is blended with historical global models to reduce performance fluctuations between adjacent rounds. Extensive experiments demonstrate that PMFL achieves superior performance compared with existing FL methods in heterogeneous scenarios.
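The abstract does not spell out the contrastive term, so the following is only a minimal sketch of one plausible reading: a MOON-style formulation in which the current global model's representation acts as the positive anchor and a historical local model's representation acts as the stable negative anchor. The temperature `tau`, the weight `mu`, and the name `model_contrastive_loss` are illustrative assumptions, not the paper's notation.

```python
import torch
import torch.nn.functional as F

def model_contrastive_loss(z_local, z_global, z_hist, tau=0.5):
    """Hedged sketch of a model-contrastive term.

    z_local:  representations from the model currently being trained on the node
    z_global: representations from the current global model (positive anchor)
    z_hist:   representations from a historical local model, serving as the
              stable contrastive point (negative anchor)
    """
    pos = F.cosine_similarity(z_local, z_global, dim=-1) / tau
    neg = F.cosine_similarity(z_local, z_hist, dim=-1) / tau
    logits = torch.stack([pos, neg], dim=1)  # shape (batch, 2)
    # The positive pair is class 0, so minimizing cross-entropy pulls the
    # local representation toward the global one and away from the historical one.
    labels = torch.zeros(z_local.size(0), dtype=torch.long, device=z_local.device)
    return F.cross_entropy(logits, labels)

# Node objective (mu is a hypothetical balancing weight):
# loss = F.cross_entropy(output, target) + mu * model_contrastive_loss(z, z_g, z_h)
```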
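Likewise, a minimal sketch of the server-side step under the stated design: each node's weight is its data size discounted by its cumulative participation count, and the aggregate is then blended with the previous global model to damp round-to-round fluctuations. The particular weighting rule, the smoothing coefficient `beta`, and all names here are assumptions for illustration, not the paper's exact method.

```python
import torch

@torch.no_grad()
def aggregate(local_models, data_sizes, part_counts, prev_global,
              beta=0.5, eps=1e-8):
    """Hedged sketch of participation-aware aggregation with smoothing.

    local_models: list of state dicts (parameter name -> tensor) from nodes
    data_sizes:   number of local samples per node
    part_counts:  cumulative participation count per node
    prev_global:  state dict of the previous global model
    """
    # Down-weight frequently participating nodes so they do not dominate
    # the global objective.
    raw = [n / (c + eps) for n, c in zip(data_sizes, part_counts)]
    total = sum(raw)
    weights = [r / total for r in raw]
    new_global = {
        k: sum(w * m[k] for w, m in zip(weights, local_models))
        for k in local_models[0]
    }
    # Momentum-style blending with the historical global model to reduce
    # performance fluctuations between adjacent rounds.
    return {k: beta * prev_global[k] + (1 - beta) * new_global[k]
            for k in new_global}
```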