In federated learning (FL), accommodating clients' varied computational capacities poses a challenge, often preventing those with constrained resources from participating in global model training. To address this issue, the concept of model heterogeneity through submodel extraction has emerged, offering a tailored solution that aligns the model's complexity with each client's computational capacity. In this work, we propose Federated Importance-Aware Submodel Extraction (FIARSE), a novel approach that dynamically adjusts submodels based on the importance of model parameters, thereby overcoming the limitations of previous static and dynamic submodel extraction methods. Compared to existing works, the proposed method offers a theoretical foundation for submodel extraction and eliminates the need for any information beyond the model parameters themselves to determine parameter importance, significantly reducing the overhead on clients. Extensive experiments on various datasets showcase the superior performance of the proposed FIARSE.
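To illustrate the core idea, the following is a minimal sketch of importance-aware submodel extraction, assuming importance is measured by parameter magnitude (a common proxy that uses only the parameters themselves); the function name and the flattened-parameter representation are hypothetical, not the paper's actual implementation:

```python
def extract_submodel_mask(params, keep_ratio):
    """Build a binary mask keeping the top `keep_ratio` fraction of
    parameters, ranked by absolute magnitude (the importance proxy)."""
    k = max(1, int(len(params) * keep_ratio))
    # Indices of the k largest-magnitude parameters.
    top = sorted(range(len(params)), key=lambda i: abs(params[i]), reverse=True)[:k]
    mask = [0] * len(params)
    for i in top:
        mask[i] = 1
    return mask

# A client whose capacity allows holding 50% of the (toy) global model:
params = [0.9, -0.05, 0.4, -1.2, 0.01, 0.3]
mask = extract_submodel_mask(params, 0.5)
# The client trains only the masked (most important) parameters.
submodel = [p * m for p, m in zip(params, mask)]
```

Because the mask is derived from the current parameter values, it can be recomputed each round, so the extracted submodel adapts as training reshapes which parameters matter.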