Although Federated Learning (FL) is promising for knowledge sharing among heterogeneous Artificial Intelligence of Things (AIoT) devices, its training performance and energy efficiency are severely restricted in practical battery-powered scenarios by the ``wooden barrel effect'' caused by the mismatch between homogeneous model paradigms and heterogeneous device capabilities. As a result, the many differences among devices make it hard for existing FL methods to train effectively under energy constraints such as limited device batteries. To tackle these issues, we propose an energy-aware FL framework named DR-FL, which accounts for the energy constraints of both clients and heterogeneous deep learning models to enable energy-efficient FL. Unlike vanilla FL, DR-FL adopts our proposed Multi-Agent Reinforcement Learning (MARL)-based dual-selection method, which allows participating devices to contribute to the global model effectively and adaptively according to their computing capabilities and energy capacities. Experiments on various widely recognized datasets demonstrate that DR-FL can optimize knowledge exchange among diverse models in large-scale AIoT systems while adhering to energy limitations, and that it improves the model performance of each individual heterogeneous device.