Federated Learning (FL), as a privacy-preserving machine learning paradigm, trains a global model across devices without exposing local data. However, resource heterogeneity and inevitable stragglers in wireless networks severely degrade the efficiency and accuracy of FL training. In this paper, we propose a novel Dynamic Cross-Tier Federated Learning framework (FedDCT). First, we design a dynamic tiering strategy that partitions devices into tiers based on their response times and assigns each tier a specific timeout threshold, reducing the training time of each round. We then propose a cross-tier device selection algorithm that selects fast-responding devices that are conducive to model convergence, improving convergence efficiency and accuracy. Experimental results demonstrate that in wireless network settings the proposed approach outperforms the baseline approach, with an average reduction of 54.7\% in convergence time and an average improvement of 1.83\% in convergence accuracy.
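The dynamic tiering idea can be illustrated with a minimal sketch. The abstract does not specify FedDCT's exact partitioning rule or timeout formula, so the quantile-style split and the `timeout_margin` factor below are illustrative assumptions, not the paper's method:

```python
def assign_tiers(response_times, num_tiers=3, timeout_margin=1.2):
    """Sketch of response-time-based device tiering (assumed rule, not FedDCT's).

    response_times: dict mapping device id -> latest observed response time (s).
    Returns (tiers, timeouts): a tier index per device, and a per-tier timeout
    set to `timeout_margin` times the slowest member's response time.
    """
    # Rank devices from fastest to slowest and split into equal-sized tiers.
    ranked = sorted(response_times, key=response_times.get)
    size = -(-len(ranked) // num_tiers)  # ceiling division
    tiers, timeouts = {}, []
    for t in range(num_tiers):
        members = ranked[t * size:(t + 1) * size]
        if not members:
            break
        for dev in members:
            tiers[dev] = t
        # Faster tiers get tighter deadlines, bounding single-round time.
        timeouts.append(timeout_margin * max(response_times[d] for d in members))
    return tiers, timeouts
```

Re-running this assignment each round (as response times drift with wireless conditions) is what makes the tiering dynamic: stragglers migrate to slower tiers instead of stalling the whole round.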