Federated Learning (FL) is a privacy-preserving machine learning paradigm in which a global model is trained in situ across a large number of distributed edge devices. These systems often comprise millions of user devices, and only a subset of the available devices can participate in training in each epoch. Designing a device selection strategy is challenging because devices are highly heterogeneous in both their system resources and their training data. This heterogeneity makes device selection crucial for timely model convergence and sufficient model accuracy. To tackle the FL client heterogeneity problem, various client selection algorithms have been developed, showing promising improvements in model convergence and accuracy. In this work, we study the overhead of client selection algorithms in a large-scale FL environment. We then propose an efficient data distribution summary calculation algorithm to reduce this overhead in a real-world large-scale FL environment. Our evaluation shows that the proposed solution achieves up to a 30x reduction in data summary time and up to a 360x reduction in clustering time.
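To make the two costs being reduced concrete, the sketch below shows one common (hypothetical) form of a data distribution summary, a normalized per-client label histogram, and how a selector might cluster clients on those summaries. This is an illustrative assumption, not the paper's actual algorithm; `label_histogram`, the client counts, and the KMeans parameters are all invented for the example.

```python
# Hypothetical sketch of the two overheads measured in this work:
# (1) computing a per-client data distribution summary, and
# (2) clustering clients on those summaries for selection.
# The paper's actual summary algorithm is not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

def label_histogram(labels: np.ndarray, num_classes: int) -> np.ndarray:
    """Normalized label counts: a cheap O(n) summary of a client's data."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    return counts / max(counts.sum(), 1.0)

# Simulate 1,000 clients with skewed (non-IID) label distributions.
rng = np.random.default_rng(0)
num_clients, num_classes = 1000, 10
summaries = np.stack([
    label_histogram(
        rng.choice(num_classes, size=200,
                   p=rng.dirichlet(np.ones(num_classes) * 0.3)),
        num_classes)
    for _ in range(num_clients)
])

# Group clients with similar data distributions; a selection strategy can
# then sample across clusters to better cover the global distribution.
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(summaries)
print(np.bincount(clusters))  # number of clients per cluster
```

Both steps are repeated at scale (potentially over millions of clients per round), which is why reducing summary-computation and clustering time dominates the selection overhead studied here.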