Federated Learning (FL) methods for joint training across multiple clients fall into two categories: i) parallel FL (PFL), where clients train models in parallel; and ii) sequential FL (SFL), where clients train models sequentially. In contrast to PFL, the convergence theory of SFL on heterogeneous data is still lacking. In this paper, we establish convergence guarantees of SFL for strongly convex, general convex, and non-convex objectives on heterogeneous data. These guarantees are better than those of PFL on heterogeneous data under both full and partial client participation. Experimental results validate the counterintuitive analytical result that SFL outperforms PFL on extremely heterogeneous data in cross-device settings.
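To make the PFL/SFL distinction concrete, here is a minimal sketch of one communication round under each paradigm. It is not from the paper: the helper `local_update`, the FedAvg-style uniform averaging in `pfl_round`, and the fixed client order in `sfl_round` are illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the paper's algorithms): one round of
# PFL (parallel local training + averaging) vs. SFL (sequential model passing).
import copy
from typing import Iterable, List, Tuple

import torch
import torch.nn as nn


def local_update(model: nn.Module,
                 data: Iterable[Tuple[torch.Tensor, torch.Tensor]],
                 lr: float = 0.1) -> nn.Module:
    """Hypothetical local training: one pass of SGD over a client's data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for x, y in data:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model


def pfl_round(global_model: nn.Module, client_data: List) -> nn.Module:
    """PFL: all clients start from the same global model (conceptually in
    parallel); their updated parameters are averaged into the new global model."""
    states = [local_update(copy.deepcopy(global_model), d).state_dict()
              for d in client_data]
    avg = {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model


def sfl_round(global_model: nn.Module, client_data: List) -> nn.Module:
    """SFL: a single model visits the clients one after another, each training
    it in turn on its own local data."""
    model = copy.deepcopy(global_model)
    for d in client_data:  # strictly sequential across clients
        model = local_update(model, d)
    global_model.load_state_dict(model.state_dict())
    return global_model
```

In this sketch, data heterogeneity enters through `client_data`: each client holds a differently distributed shard, which is exactly the regime the convergence guarantees above address.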