Federated learning (FL) is a collaborative machine learning approach that enables multiple clients to train models without sharing their private data. With the rise of deep learning, large-scale models have garnered significant attention due to their exceptional performance. However, a key challenge in FL is that clients with constrained computational and communication resources cannot readily deploy such large models. The Mixture of Experts (MoE) architecture addresses this challenge through its sparse activation property, which reduces the computational workload and communication demands during inference and updates. MoE also facilitates personalization by allowing each expert to specialize in a different subset of the data distribution. To alleviate the communication burden between the server and clients, we propose FedMoE-DA, a new FL model training framework that leverages the MoE architecture and incorporates a novel domain-aware, fine-grained aggregation strategy to enhance robustness, personalization, and communication efficiency simultaneously. Specifically, FedMoE-DA exploits the correlation between intra-client expert models and inter-client data heterogeneity. Moreover, we employ peer-to-peer (P2P) communication between clients for selective expert-model synchronization, which significantly reduces server-client transmissions. Experiments demonstrate that FedMoE-DA achieves excellent performance while reducing the communication pressure on the server.
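To make the sparse activation property concrete, the following is a minimal sketch (not the paper's implementation) of a top-k gated MoE forward pass: a router scores all experts, but only the k highest-scoring experts are actually evaluated, so compute and the set of parameters touched per input scale with k rather than with the total number of experts. All sizes and names here (`num_experts`, `top_k`, linear experts) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 8 experts, top-2 sparse activation (assumed, not from the paper).
num_experts, d_in, d_out, top_k = 8, 16, 16, 2

# Each expert is a simple linear map (weights only, for brevity).
experts = [rng.standard_normal((d_in, d_out)) * 0.1 for _ in range(num_experts)]
gate_w = rng.standard_normal((d_in, num_experts)) * 0.1  # router weights

def moe_forward(x):
    """Route input x to its top-k experts; only those experts run."""
    logits = x @ gate_w                       # router scores, shape (num_experts,)
    top = np.argsort(logits)[-top_k:]         # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over the selected experts only
    # Sparse activation: evaluate just the chosen experts, then mix their outputs.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(d_in))
```

Because each input activates only `top_k` of the `num_experts` experts, a client updates (and a federated scheme needs to communicate) only the experts its local data actually routes to, which is the property the aggregation strategy above builds on.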