Federated Learning (FL) is a distributed machine learning technique that enables model training across multiple devices or organizations by sharing training parameters instead of raw data. However, adversaries can still infer individual information through inference attacks (e.g., differential attacks) on these training parameters. As a result, Differential Privacy (DP) has been widely adopted in FL to prevent such attacks. We consider differentially private federated learning in a resource-constrained scenario, where both the privacy budget and the number of communication rounds are limited. Through a theoretical convergence analysis, we derive the optimal number of local DPSGD iterations for clients between any two successive global updates. Based on this, we design an algorithm for Differentially Private Federated Learning with Adaptive Local Iterations (ALI-DPFL). We evaluate our algorithm on the MNIST, FashionMNIST, and CIFAR-10 datasets, and demonstrate significantly better performance than previous work in the resource-constrained scenario. Code is available at https://github.com/KnightWan/ALI-DPFL.
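The local DPSGD update underlying the method — per-sample gradient clipping followed by Gaussian noise addition — can be sketched as follows. This is a minimal NumPy illustration of the standard DPSGD step, not the paper's implementation; the function name and parameters are illustrative.

```python
import numpy as np

def dpsgd_step(w, per_sample_grads, lr=0.1, clip_norm=1.0,
               noise_multiplier=1.0, rng=None):
    """One local DPSGD step (illustrative sketch).

    Clips each per-sample gradient to L2 norm `clip_norm`, sums them,
    adds Gaussian noise scaled by `noise_multiplier * clip_norm`,
    averages, and takes a gradient step.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down gradients whose norm exceeds the clipping threshold.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    # Gaussian noise calibrated to the clipping bound (sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(per_sample_grads)
    return w - lr * noisy_mean
```

In ALI-DPFL, the number of such local steps performed between two global aggregations is chosen adaptively based on the convergence analysis, rather than being fixed in advance.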