Clustered Federated Multi-task Learning (CFL) has emerged as a promising technique for addressing statistical challenges, particularly non-independent and identically distributed (non-IID) data across users. However, existing CFL studies rely entirely on the impractical assumption that devices have access to accurate ground-truth labels. This assumption becomes problematic in hierarchical wireless networks (HWNs), which feature vast amounts of unlabeled data and dual-level model aggregation, slowing convergence, extending processing time, and increasing resource consumption. To this end, we propose Clustered Federated Semi-Supervised Learning (CFSL), a novel framework tailored to realistic scenarios in HWNs. We leverage the specialized models produced by device clustering and present two prediction-model schemes: the best-performing specialized model and the weighted-averaging ensemble model. The former assigns the most suitable specialized model to label the unlabeled data, while the latter unifies the specialized models to capture broader data distributions. CFSL also introduces two novel prediction-time schemes, split-based and stopping-based, to determine accurate labeling timing, along with two device-selection strategies, greedy and round-robin. Extensive experiments validate CFSL's superiority in labeling/testing accuracy and resource efficiency, achieving energy savings of up to 51%.