Extreme resource constraints make large-scale machine learning (ML) with distributed clients challenging in wireless networks. On the one hand, large-scale ML requires massive information exchange between clients and server(s). On the other hand, these clients have limited battery capacity and computational power, which are often dedicated to operational computations. Split federated learning (SFL) is emerging as a potential solution to mitigate these challenges by splitting the ML model into client-side and server-side model blocks, where only the client-side block is trained on the client device. However, practical applications require personalized models that are tailored to each client's task. Motivated by this, we propose a personalized hierarchical split federated learning (PHSFL) algorithm that is specifically designed to achieve better personalization performance. More specifically, since many features share similar attributes regardless of how severely the statistical data distributions differ across clients, we train only the body of the federated learning (FL) model while keeping the (randomly initialized) classifier frozen during training. We first perform extensive theoretical analysis to understand the impact of model splitting and hierarchical model aggregations on the global model. Once the global model is trained, we fine-tune each client's classifier to obtain the personalized models. Our empirical findings suggest that while the globally trained model with the untrained classifier performs similarly to existing solutions, the fine-tuned models show significantly improved personalization performance.
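The two-phase recipe described above (train only the model body against a frozen, randomly initialized classifier, then fine-tune the classifier on a client's local data for personalization) can be sketched as follows. This is a minimal single-client illustration, not the paper's PHSFL algorithm: the toy data, layer sizes, and learning rate are assumptions for demonstration, and the split/hierarchical aggregation steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 2-class synthetic data and a one-hidden-layer model.
# "Body" = hidden layer (trained in phase 1); "classifier" = output layer
# (randomly initialized and frozen in phase 1, fine-tuned in phase 2).
d, h, c, n = 10, 16, 2, 200
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

W_body = rng.normal(scale=0.1, size=(d, h))
W_clf = rng.normal(scale=0.1, size=(h, c))  # frozen during phase 1

def forward(X, Wb, Wc):
    z = np.maximum(X @ Wb, 0.0)  # ReLU body features
    logits = z @ Wc
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return z, e / e.sum(axis=1, keepdims=True)  # softmax probabilities

def accuracy(X, y, Wb, Wc):
    _, p = forward(X, Wb, Wc)
    return float((p.argmax(axis=1) == y).mean())

# Phase 1: train only the body; the random classifier stays frozen.
for _ in range(300):
    z, p = forward(X, W_body, W_clf)
    g_logits = p.copy()
    g_logits[np.arange(n), y] -= 1.0  # softmax cross-entropy gradient
    g_logits /= n
    g_z = g_logits @ W_clf.T
    g_z[z <= 0] = 0.0                 # ReLU gradient mask
    W_body -= 0.5 * (X.T @ g_z)       # only the body is updated

# Phase 2: personalization; fine-tune the classifier on (client) data
# with the body now held fixed.
for _ in range(300):
    z, p = forward(X, W_body, W_clf)
    g_logits = p.copy()
    g_logits[np.arange(n), y] -= 1.0
    g_logits /= n
    W_clf -= 0.5 * (z.T @ g_logits)   # only the classifier is updated

acc = accuracy(X, y, W_body, W_clf)
```

In a real deployment each client would run phase 2 on its own local data, yielding one personalized classifier per client on top of the shared body.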