We propose clustered federated multitask learning to address the statistical challenges posed by non-independent and identically distributed (non-IID) data across clients. In hierarchical wireless networks, hierarchical model aggregation and client selection introduce additional complexities, notably slower convergence and mismatched model allocation. Our approach addresses these by clustering clients according to the similarity of their data distributions and assigning a specialized model to each cluster. The proposed framework combines a two-phase client selection scheme with two-level model aggregation, using greedy and round-robin methods to ensure fairness and effective participation. It significantly accelerates convergence, shortens training time, and reduces energy consumption by up to 60%, while ensuring that each client receives a model tailored to its data.
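The two core mechanisms above can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's exact method: the k-means clustering over per-client label distributions, the utility scores driving the greedy phase, and all function names (`cluster_clients`, `select_clients`) are hypothetical stand-ins for the actual similarity metric and selection criteria.

```python
import numpy as np

def cluster_clients(label_dists, k, iters=20):
    """Group clients by similarity of their data distributions.

    label_dists: (n_clients, n_classes) array, each row a client's
    label frequency vector. Uses plain k-means with deterministic
    farthest-point initialization (an illustrative choice).
    """
    X = np.asarray(label_dists, dtype=float)
    centroids = np.empty((k, X.shape[1]))
    centroids[0] = X[0]
    for j in range(1, k):
        # Pick the client farthest from all centroids chosen so far.
        d = np.min(np.linalg.norm(X[:, None] - centroids[None, :j], axis=2), axis=1)
        centroids[j] = X[d.argmax()]
    for _ in range(iters):
        # Assign each client to its nearest cluster centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; an empty cluster keeps its old centroid.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

def select_clients(scores, quota, rr_pointer, greedy_frac=0.5):
    """Two-phase selection: greedy picks by a utility score (e.g. channel
    quality -- hypothetical here), then round-robin fills the remaining
    slots so every client eventually participates (fairness)."""
    n = len(scores)
    n_greedy = int(quota * greedy_frac)
    order = np.argsort(scores)[::-1]          # highest score first
    chosen = [int(c) for c in order[:n_greedy]]
    i = rr_pointer                            # cycle from the saved position
    while len(chosen) < quota:
        if i % n not in chosen:
            chosen.append(i % n)
        i += 1
    return chosen, i % n                      # new pointer carried to next round
```

Per-cluster models would then be aggregated only over the clients assigned to that cluster, with a second aggregation level across edge servers; that hierarchy is omitted here for brevity.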