Federated Learning (FL), as a promising distributed machine learning paradigm, has been widely adopted in Artificial Intelligence of Things (AIoT) applications. However, the efficiency and inference capability of FL are seriously limited by stragglers and data imbalance across massive AIoT devices, respectively. To address these challenges, we present a novel asynchronous FL approach named CaBaFL, which comprises a hierarchical Cache-based aggregation mechanism and a feature Balance-guided device selection strategy. CaBaFL maintains multiple intermediate models simultaneously for local training. The hierarchical cache-based aggregation mechanism allows each intermediate model to be trained on multiple devices, aligning training times and mitigating the straggler problem. Specifically, each intermediate model is stored in a low-level cache for local training; once it has been trained by a sufficient number of devices, it is promoted to a high-level cache for aggregation. To address imbalanced data, the feature balance-guided device selection strategy adopts the activation distribution as a metric, enabling each intermediate model to be trained across devices whose combined data distribution is balanced before aggregation. Experimental results show that, compared with state-of-the-art FL methods, CaBaFL achieves up to 9.26X training acceleration and 19.71\% accuracy improvement.
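To make the two-level cache workflow concrete, the following is a minimal sketch, not the authors' implementation: each intermediate model lives in a low-level cache, each training round a device is chosen whose data distribution steers the model's accumulated feature toward a balanced (uniform) distribution, and after `k` trainings the model is promoted to the high-level cache for aggregation. All names (`CaBaFLServer`, `local_train`, the use of per-class data distributions as a stand-in for activation distributions) are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 4

def local_train(w, data_dist):
    # Stand-in for on-device SGD: a small random nudge to the weights.
    return w + 0.01 * rng.standard_normal(w.shape)

class CaBaFLServer:
    """Toy sketch of CaBaFL's hierarchical cache (assumed simplification)."""

    def __init__(self, global_w, n_models=2, k=3):
        self.global_w = global_w
        self.k = k  # devices that must train a model before promotion
        # Low-level cache: intermediate models with training count and
        # accumulated feature (here: summed per-class data distribution).
        self.low = [{"w": global_w.copy(), "count": 0,
                     "feat": np.zeros(NUM_CLASSES)} for _ in range(n_models)]
        self.high = []  # high-level cache: models awaiting aggregation

    def select_device(self, m, device_dists):
        # Feature balance-guided selection: pick the device whose data
        # distribution moves the model's accumulated feature closest to
        # a uniform (balanced) distribution.
        uniform = np.full(NUM_CLASSES, 1.0 / NUM_CLASSES)
        def imbalance(i):
            combined = m["feat"] + device_dists[i]
            combined = combined / combined.sum()
            return np.abs(combined - uniform).sum()
        return min(range(len(device_dists)), key=imbalance)

    def step(self, device_dists):
        for m in self.low:
            d = self.select_device(m, device_dists)
            m["w"] = local_train(m["w"], device_dists[d])
            m["feat"] += device_dists[d]
            m["count"] += 1
            if m["count"] >= self.k:
                # Promote to high-level cache; reset the low-level slot.
                self.high.append(m["w"].copy())
                m["w"] = self.global_w.copy()
                m["count"] = 0
                m["feat"] = np.zeros(NUM_CLASSES)
        if self.high:
            # Aggregate promoted models into the global model.
            self.global_w = np.mean(self.high, axis=0)
            self.high.clear()
```

In this sketch, four devices each holding a skewed class distribution would be visited in a balancing order, so that by promotion time a model has seen a roughly uniform mix, mirroring the abstract's claim that selection equalizes the data seen before aggregation.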