Federated Learning (FL) provides a privacy-preserving mechanism for distributed training of machine learning models on networked devices (e.g., mobile devices, IoT edge nodes). It enables Artificial Intelligence (AI) at the edge by creating models without sharing actual data across the network. Existing research typically focuses on generic aspects of non-IID data and heterogeneity in clients' system characteristics, but often neglects the issue of insufficient data for model development, which can arise from uneven class label distribution and highly variable data volumes across edge nodes. In this work, we propose FLIGAN, a novel approach to address the issue of data incompleteness in FL. First, we leverage Generative Adversarial Networks (GANs) to adeptly capture complex data distributions and generate synthetic data that closely resembles real-world data. Then, we use the synthetic data to enhance the robustness and completeness of datasets across nodes. Our methodology adheres to FL's privacy requirements by generating synthetic data in a federated manner without sharing the actual data in the process. We incorporate techniques such as classwise sampling and node grouping, designed to improve the federated GAN's performance, enabling the creation of high-quality synthetic datasets and facilitating efficient FL training. Empirical results from our experiments demonstrate that FLIGAN significantly improves model accuracy, especially in scenarios with high class imbalance, achieving up to a 20% increase in model accuracy over traditional FL baselines.
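The two core ideas in the abstract can be sketched in a few lines: federated aggregation of GAN parameters without moving raw data, and classwise balancing that decides how many synthetic samples each node needs per label. This is a minimal numerical sketch, not FLIGAN's actual implementation; the FedAvg-style size-weighted averaging and the helper names (`fedavg`, `augment_to_balance`) are assumptions for illustration.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation (assumed here, not stated in the abstract):
    size-weighted average of per-node generator parameter vectors."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def augment_to_balance(labels, target_per_class=None):
    """Classwise sampling sketch: how many synthetic samples per class a node
    needs so every class reaches the size of the largest (or a given target)."""
    classes, counts = np.unique(labels, return_counts=True)
    target = target_per_class or counts.max()
    return {int(c): int(max(0, target - n)) for c, n in zip(classes, counts)}

# Toy example: three nodes share only generator weights, never raw data.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]
global_w = fedavg(weights, sizes)   # -> array([3.5, 4.5])

# Imbalanced node-local labels: class 0 dominates.
labels = np.array([0, 0, 0, 0, 1, 2, 2])
need = augment_to_balance(labels)   # -> {0: 0, 1: 3, 2: 2}
```

Only parameter vectors cross the network in this sketch, mirroring the privacy constraint; each node then generates the per-class synthetic counts locally with its copy of the global generator.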