Federated Learning (FL) plays a critical role in distributed systems where data privacy and confidentiality are paramount, particularly in edge-based data processing systems such as IoT devices deployed in smart homes. FL is a privacy-enforcing sub-domain of machine learning that enables model training on client devices, eliminating the need to share private data with a central server. While existing research has predominantly addressed challenges pertaining to data heterogeneity, a gap remains in addressing issues such as varying device capabilities and efficient communication. These unaddressed issues have significant implications in resource-constrained environments; in particular, the practical implementation of FL-based IoT or edge systems remains highly inefficient. In this paper, we propose the "Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments (REFT)," a novel approach specifically devised to address these challenges on resource-limited devices. Our proposed method uses Variable Pruning to optimize resource utilization by adapting the pruning strategy to the computational capabilities of each client. Furthermore, REFT employs knowledge distillation to minimize the need for continuous bidirectional client-server communication, achieving a significant reduction in communication bandwidth and thereby enhancing overall resource efficiency. We conduct experiments on an image classification task, and the results demonstrate the effectiveness of our approach in resource-limited settings. Our technique not only preserves data privacy and performance standards but also accommodates heterogeneous model architectures, facilitating the participation of a broader array of diverse client devices in the training process, all while consuming minimal bandwidth.
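The core idea behind capability-adaptive variable pruning can be illustrated with a minimal sketch. The mapping function, its parameters (`min_ratio`, `max_ratio`), and the use of global magnitude pruning are illustrative assumptions, not the paper's exact formulation: weaker clients receive a more heavily pruned model, and pruning keeps only the largest-magnitude weights.

```python
import numpy as np

def pruning_ratio(client_flops, max_flops, min_ratio=0.2, max_ratio=0.8):
    # Hypothetical mapping: normalize client capability into (0, 1], then
    # interpolate so the weakest clients get the most aggressive pruning.
    capability = client_flops / max_flops
    return max_ratio - capability * (max_ratio - min_ratio)

def magnitude_prune(weights, ratio):
    # Zero out the fraction `ratio` of weights with the smallest magnitude.
    flat = np.abs(weights).ravel()
    k = int(flat.size * ratio)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: a weak client (25% of peak FLOPS) gets a higher pruning ratio
# than the strongest client, and its layer is sparsified accordingly.
rng = np.random.default_rng(0)
layer = rng.normal(size=(256, 256))
ratio = pruning_ratio(client_flops=2.5e8, max_flops=1e9)
sparse_layer = magnitude_prune(layer, ratio)
```

In a real deployment the capability signal could come from profiled latency or memory rather than nominal FLOPS, and pruning would typically be applied per layer (or structurally, over filters) rather than globally over one weight matrix.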