While federated learning (FL) is a widely popular distributed ML strategy that protects data privacy, time-varying wireless network parameters and the heterogeneous system configurations of wireless devices pose significant challenges. Although the limited radio and computational resources of the network and the clients, respectively, are widely acknowledged, two critical yet often ignored aspects are that (a) wireless devices can dedicate only a small portion of their limited storage to the FL task and (b) in many practical wireless applications, new training samples may arrive in an online manner. We therefore propose a new FL algorithm, OSAFL, specifically designed to learn tasks relevant to wireless applications under these practical considerations. Since clients under extreme resource constraints may perform arbitrary numbers of local training steps, which can lead to client drift under statistically heterogeneous data distributions, we leverage normalized gradient similarities and weight clients' updates by optimized scores that improve the convergence rate of the proposed OSAFL algorithm. Our extensive simulation results on two different tasks, each with three different datasets, using four popular ML models validate the effectiveness of OSAFL compared to six existing state-of-the-art FL baselines.
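To make the aggregation idea concrete, the following is a minimal sketch of similarity-weighted aggregation: each client's update is scored by its normalized (cosine) similarity to the mean update direction, and the scores are softmax-normalized into aggregation weights so that drifting clients contribute less. This is only an illustration under simplified assumptions; OSAFL's actual score optimization is defined in the paper and is not reproduced here.

```python
import numpy as np

def similarity_weighted_aggregate(client_updates):
    """Aggregate client model updates, weighting each client by its
    cosine similarity to the mean update direction.

    NOTE: illustrative sketch only -- the function name and the
    softmax scoring are assumptions, not OSAFL's exact rule.
    """
    updates = np.stack(client_updates)            # shape (K, d)
    mean_dir = updates.mean(axis=0)
    mean_dir = mean_dir / (np.linalg.norm(mean_dir) + 1e-12)
    # Cosine similarity of each client's update to the mean direction
    norms = np.linalg.norm(updates, axis=1) + 1e-12
    sims = (updates @ mean_dir) / norms           # values in [-1, 1]
    # Softmax turns similarity scores into positive weights summing to 1
    w = np.exp(sims - sims.max())
    w = w / w.sum()
    return w @ updates                            # weighted average, shape (d,)
```

In this toy rule, a client whose update opposes the majority direction (client drift under heterogeneous data) receives a near-zero weight, while aligned clients dominate the aggregate.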