Federated learning enables diverse devices to collaboratively train a shared model while keeping training data stored locally, avoiding the need for centralized cloud storage. Despite existing privacy measures, private data can still be exposed by reverse-engineering the exchanged gradients, even when noise is added. To address this, recent research has emphasized encrypting model parameters during training. This paper introduces a novel federated learning algorithm that leverages coded local gradients without encryption, exchanges coded proxies in place of the model parameters, and injects surplus noise for enhanced privacy. Two variants of the algorithm are presented, with convergence and learning rates that adapt to the coding scheme and the characteristics of the raw data. Two encryption-free implementations, using fixed and random coding matrices respectively, are provided, and simulations show promising results from both federated optimization and machine learning perspectives.
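As a rough illustration of the coded-proxy idea, the sketch below shows one way a client might code its local gradient with a shared coding matrix and inject noise before upload, so the server only ever aggregates coded quantities. This is a minimal assumption-laden example, not the paper's actual algorithm: the dimension `d`, the noise scale `sigma`, the matrix `A`, and the decoding step are all placeholders.

```python
# Minimal sketch of coded gradient exchange (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)

d = 8            # model dimension (assumed)
num_clients = 4
sigma = 0.1      # scale of injected surplus noise (assumed)

# Fixed-coding-matrix variant: a shared d x d matrix, assumed invertible.
A = rng.standard_normal((d, d))

def coded_update(local_grad: np.ndarray) -> np.ndarray:
    """Return a coded, noise-perturbed proxy of a raw local gradient."""
    return A @ local_grad + sigma * rng.standard_normal(d)

# Each client computes a gradient on its private data (random stand-ins here).
local_grads = [rng.standard_normal(d) for _ in range(num_clients)]

# The server averages coded proxies without seeing any raw gradient.
aggregate_proxy = np.mean([coded_update(g) for g in local_grads], axis=0)

# Clients who know A can decode the aggregate to drive their local update;
# with zero-mean noise, the decoding error averages out across rounds.
decoded_aggregate = np.linalg.solve(A, aggregate_proxy)
print(decoded_aggregate)
```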