Federated Learning (FL) is an emerging machine learning paradigm that enables the collaborative training of a shared global model across distributed clients while keeping the data decentralized. Recent work on designing systems for efficient FL has shown that utilizing serverless computing technologies, particularly Function-as-a-Service (FaaS), can enhance resource efficiency, reduce training costs, and alleviate the complex infrastructure management burden on data holders. However, existing serverless FL systems implicitly assume a uniform global model architecture across all participating clients during training. This assumption fails to address fundamental challenges in practical FL arising from resource and statistical data heterogeneity among FL clients. To address these challenges and enable heterogeneous client models in serverless FL, we utilize Knowledge Distillation (KD) in this paper. To this end, we propose novel optimized serverless workflows for two popular conventional federated KD techniques, FedMD and FedDF. We implement these workflows by introducing several extensions to an open-source serverless FL system called FedLess. Moreover, we comprehensively evaluate the two strategies with heterogeneous client models on multiple datasets across varying levels of client data heterogeneity, measuring accuracy, fine-grained training times, and costs. Our experiments demonstrate that serverless FedDF is more robust to extreme non-IID data distributions, faster, and cheaper than serverless FedMD. In addition, compared to the original implementations, our optimizations for particular steps in FedMD and FedDF yield average speedups of 3.5x and 1.76x, respectively, across all datasets.
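The ensemble-distillation idea underlying FedDF can be illustrated with a minimal sketch: each client evaluates its (possibly architecturally different) model on a shared public dataset, the server averages the resulting logits into soft teacher labels, and a server-side student model is trained to match them. This is a toy illustration only, not the paper's implementation: the function and variable names are hypothetical, a linear NumPy model stands in for the real neural networks, and plain cross-entropy to the averaged soft labels stands in for the mini-batch KL-based distillation FedDF actually uses.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def feddf_distill_sketch(client_logits, public_x, W, lr=0.3, steps=200):
    """Toy FedDF-style aggregation (hypothetical helper, not the FedLess API).

    client_logits: (n_clients, n_samples, n_classes) logits each client
                   produced on the shared public dataset.
    public_x:      (n_samples, d) features of the public dataset.
    W:             (d, n_classes) weights of a linear "server model" that is
                   distilled toward the ensemble of client predictions.
    """
    # Ensemble step: average client logits, then softmax -> soft teacher labels.
    teacher = softmax(np.mean(client_logits, axis=0))
    for _ in range(steps):
        student = softmax(public_x @ W)
        # Gradient of cross-entropy(teacher, student) w.r.t. W for a linear model.
        grad = public_x.T @ (student - teacher) / len(public_x)
        W = W - lr * grad
    return W

# Usage with synthetic data (3 heterogeneous clients, 4 classes).
rng = np.random.default_rng(0)
public_x = rng.normal(size=(50, 5))
client_logits = rng.normal(size=(3, 50, 4))
W = feddf_distill_sketch(client_logits, public_x, np.zeros((5, 4)))
```

FedMD differs mainly in where the matching happens: instead of distilling one server model, every client locally trains toward the consensus (averaged) logits on the public data before fine-tuning on its private data.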