Although Federated Learning (FL) is promising for knowledge sharing among heterogeneous Artificial Intelligence of Things (AIoT) devices, its training performance and energy efficiency are severely restricted in practical battery-driven scenarios by the ``wooden barrel effect'' caused by the mismatch between homogeneous model paradigms and heterogeneous device capabilities. As a result, existing FL methods struggle to train effectively in energy-constrained scenarios, such as those imposed by device battery limits. To tackle these issues, we propose an energy-aware FL framework named DR-FL, which accounts for the energy constraints of both clients and heterogeneous deep learning models to enable energy-efficient FL. Unlike vanilla FL, DR-FL adopts our proposed Multi-Agent Reinforcement Learning (MARL)-based dual-selection method, which allows participating devices to contribute to the global model effectively and adaptively according to their computing capabilities and energy capacities. Experiments on various well-known datasets show that DR-FL not only maximises knowledge sharing among heterogeneous models under the energy constraints of large-scale AIoT systems but also improves the model performance of each participating heterogeneous device.