We present PeFLL, a new personalized federated learning algorithm that improves over the state-of-the-art in three aspects: 1) it produces more accurate models, especially in the low-data regime, and not only for clients present during its training phase, but also for any that may emerge in the future; 2) it reduces the amount of on-client computation and client-server communication by providing future clients with ready-to-use personalized models that require no additional finetuning or optimization; 3) it comes with theoretical guarantees that establish generalization from the observed clients to future ones. At the core of PeFLL lies a learning-to-learn approach that jointly trains an embedding network and a hypernetwork. The embedding network is used to represent clients in a latent descriptor space in a way that reflects their similarity to each other. The hypernetwork takes as input such descriptors and outputs the parameters of fully personalized client models. In combination, both networks constitute a learning algorithm that achieves state-of-the-art performance in several personalized federated learning benchmarks.
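The two-network pipeline described above can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the network architectures, dimensions, and function names (`embed_client`, `hypernet`, `personalized_predict`) are all assumptions, with simple linear maps standing in for the trained embedding network and hypernetwork.

```python
import random

random.seed(0)

DESC_DIM = 4             # client descriptor dimension (assumed)
IN_DIM, OUT_DIM = 3, 2   # personalized model: one linear layer (assumed)

# Stand-in parameters for the two jointly trained networks (random here;
# in PeFLL these would be learned across the observed clients).
E = [[random.gauss(0, 1) for _ in range(IN_DIM)] for _ in range(DESC_DIM)]
H = [[random.gauss(0, 0.1) for _ in range(DESC_DIM)]
     for _ in range(OUT_DIM * IN_DIM + OUT_DIM)]

def embed_client(client_data):
    """Embedding network (stand-in): average the client's feature vectors,
    then apply a linear projection to obtain a DESC_DIM descriptor."""
    n = len(client_data)
    mean = [sum(x[i] for x in client_data) / n for i in range(IN_DIM)]
    return [sum(E[j][i] * mean[i] for i in range(IN_DIM))
            for j in range(DESC_DIM)]

def hypernet(descriptor):
    """Hypernetwork (stand-in): a linear map from the descriptor to the
    flattened parameters (weights and biases) of the personalized model."""
    flat = [sum(H[k][j] * descriptor[j] for j in range(DESC_DIM))
            for k in range(len(H))]
    W = [flat[o * IN_DIM:(o + 1) * IN_DIM] for o in range(OUT_DIM)]
    b = flat[OUT_DIM * IN_DIM:]
    return W, b

def personalized_predict(client_data, x):
    """Descriptor -> parameters -> prediction. Note that the client only
    evaluates the generated model; no per-client finetuning is performed,
    which is the point of the ready-to-use personalization in the abstract."""
    W, b = hypernet(embed_client(client_data))
    return [sum(W[o][i] * x[i] for i in range(IN_DIM)) + b[o]
            for o in range(OUT_DIM)]

client_data = [[0.5, -1.0, 2.0], [1.5, 0.0, 1.0]]
print(personalized_predict(client_data, [1.0, 1.0, 1.0]))
```

Because the personalized parameters are a function of the client descriptor alone, a client that first appears after training can receive a ready-to-use model from a single embedding pass, matching the abstract's claim of reduced on-client computation.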