In this paper, we consider asynchronous federated learning (FL) over time-division multiple access (TDMA)-based communication networks. Using TDMA to transmit local updates can introduce significant delays in conventional synchronous FL, where all devices start local training from a common global model. In the proposed asynchronous FL approach, we partition devices into multiple TDMA groups, enabling simultaneous local computation and communication across different groups. This improves time efficiency at the expense of staleness in the local updates. We derive the relationship between the staleness of local updates and the TDMA group size in a training round. Moreover, our convergence analysis shows that although outdated local updates hinder proper global model updates, asynchronous FL over the TDMA channel converges even in the presence of data heterogeneity. Notably, the analysis identifies the impact of outdated local updates on the convergence rate. Based on these observations, we refine the asynchronous FL strategy by introducing an intentional delay in local training, which accelerates convergence by reducing the staleness of local updates. Extensive simulation results demonstrate that asynchronous FL with the intentional delay rapidly reduces the global loss by lowering the staleness of local updates in resource-limited wireless communication networks.