Recently, Federated Learning (FL) has gained popularity for its privacy-preserving and collaborative learning capabilities. Personalized Federated Learning (PFL) builds upon FL to address statistical heterogeneity and achieve personalization. Personalized-head-based PFL is a common and effective approach that splits the model into a feature extractor and a head: the feature extractor is collaboratively trained and shared, while the head is trained locally and not shared. However, keeping the head entirely local, although it achieves personalization, prevents the head from learning global knowledge and thus limits the performance of the personalized model. To solve this problem, we propose a novel PFL method called Federated Learning with Aggregated Head (FedAH), which initializes the local head with an Aggregated Head at each iteration. The key feature of FedAH is element-level aggregation between the local model head and the global model head, which introduces global information from the global model head. To evaluate the effectiveness of FedAH, we conduct extensive experiments on five benchmark datasets from computer vision and natural language processing. FedAH outperforms ten state-of-the-art FL methods by 2.87% in test accuracy. Additionally, FedAH maintains its advantage even when some clients drop out unexpectedly. Our code is publicly available at https://github.com/heyuepeng/FedAH.
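The element-level aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name `aggregate_head` and the per-element mixing coefficient `alpha` are assumptions for exposition (how `alpha` is obtained in FedAH is not specified here; see the linked repository for the actual implementation).

```python
import numpy as np

def aggregate_head(local_head, global_head, alpha):
    """Element-wise blend of local and global head parameters.

    alpha has the same shape as the head parameters, so every element
    of the head can mix local and global information independently.
    """
    return alpha * local_head + (1.0 - alpha) * global_head

# Toy example: a 2x3 head weight matrix.
local_head = np.ones((2, 3))    # stands in for the client's local head
global_head = np.zeros((2, 3))  # stands in for the aggregated global head
alpha = np.full((2, 3), 0.75)   # hypothetical per-element coefficients

agg = aggregate_head(local_head, global_head, alpha)
print(agg)  # every element is 0.75 * 1 + 0.25 * 0 = 0.75
```

Because `alpha` is a full array rather than a scalar, each head parameter can lean toward the local or the global model independently, which is what distinguishes element-level aggregation from a single global mixing weight.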