Federated Learning (FL) has shown considerable promise in Computing Power Networks (CPNs) for privacy protection, efficient data utilization, and dynamic collaboration. Despite these practical benefits, applying FL in CPNs still faces a major obstacle, i.e., multi-task deployment: existing work mainly focuses on mitigating the computation and communication overhead of a single FL task while overlooking the waste of computing resources on heterogeneous devices across multiple tasks in FL under CPNs. To tackle this issue, we design FedAPTA, a federated multi-task learning framework for CPNs. FedAPTA alleviates computing resource wastage through a layer-wise model pruning technique that reduces local model size while accounting for both data and device heterogeneity. To aggregate the structurally heterogeneous local models of different tasks, we introduce a heterogeneous model recovery strategy and a task-aware model aggregation method, which enable aggregation by filling in each local model's architecture with the shared global model and clustering local models according to their specific tasks. We deploy FedAPTA on a realistic FL platform and benchmark it against nine state-of-the-art FL methods. Experimental results demonstrate that FedAPTA outperforms these state-of-the-art FL methods by up to 4.23%. Our code is available at https://github.com/Zhenzovo/FedCPN.
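To make the two server-side steps concrete, the following is a minimal sketch (not the authors' implementation, which is in the linked repository) of heterogeneous model recovery and task-aware aggregation. It assumes each local model is a dict of per-layer weight arrays in which pruned layers are simply absent; the function names, the clustering-by-task-id shortcut, and plain weight averaging are all illustrative assumptions.

```python
# Illustrative sketch of recovery + task-aware aggregation under the
# assumptions stated above; all names and data layouts are hypothetical.
from collections import defaultdict
import numpy as np

def recover_model(local_model, global_model):
    """Heterogeneous model recovery: layers pruned away on the device are
    filled in with the corresponding layers of the shared global model."""
    return {name: local_model.get(name, global_weights)
            for name, global_weights in global_model.items()}

def task_aware_aggregate(local_models, task_ids, global_model):
    """Task-aware aggregation: group recovered local models by task and
    average the weights within each group to form per-task models."""
    clusters = defaultdict(list)
    for model, task in zip(local_models, task_ids):
        clusters[task].append(recover_model(model, global_model))
    return {
        task: {name: np.mean([m[name] for m in models], axis=0)
               for name in global_model}
        for task, models in clusters.items()
    }
```

In this sketch, clustering is reduced to grouping by a known task identifier; the paper's task-aware method may instead infer task groupings from the models themselves.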