While federated learning (FL) eliminates the transmission of raw data over a network, it remains vulnerable to privacy breaches through the communicated model parameters. In this work, we propose Multi-Tier Federated Learning with Multi-Tier Differential Privacy (M^2FDP), a DP-enhanced FL methodology for jointly optimizing privacy and performance in hierarchical networks. A key idea in M^2FDP is to extend hierarchical differential privacy (HDP) to Multi-Tier Differential Privacy (MDP), adapting DP noise injection across the layers of an established FL hierarchy -- edge devices, edge servers, and cloud servers -- according to the trust models within particular subnetworks. We conduct a comprehensive analysis of the convergence behavior of M^2FDP, revealing conditions on parameter tuning under which the training process converges sublinearly to a finite stationarity gap that depends on the network hierarchy, trust model, and target privacy level. Subsequent numerical evaluations demonstrate that M^2FDP obtains substantial improvements in both privacy and model performance over baselines across different privacy budgets, and validate the impact of different system configurations.
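To make the tier-dependent noise-injection idea concrete, below is a minimal sketch, assuming a Gaussian mechanism with per-update clipping over a three-tier device/edge/cloud hierarchy. The function names, the `sigma_device`/`sigma_edge` noise scales, and the per-edge `trusted` flags are illustrative assumptions for exposition, not the paper's actual M^2FDP algorithm.

```python
# Conceptual sketch (not the authors' implementation): tier-dependent Gaussian
# noise injection in a three-tier FL aggregation. Noise is added at the lowest
# tier whose aggregator is untrusted.
import numpy as np

def clip_update(update, clip_norm):
    """Clip an update to bound its L2 sensitivity before adding DP noise."""
    norm = max(np.linalg.norm(update), 1e-12)
    return update * min(1.0, clip_norm / norm)

def gaussian_noise(shape, sigma, clip_norm, rng):
    """Gaussian-mechanism noise scaled to the clipped sensitivity."""
    return rng.normal(0.0, sigma * clip_norm, size=shape)

def aggregate_hierarchy(device_updates_per_edge, sigma_device, sigma_edge,
                        edge_trusted, clip_norm=1.0, seed=0):
    """Aggregate device -> edge -> cloud. Devices under an untrusted edge
    server perturb their own updates (local DP); a trusted edge server
    instead adds a single, smaller noise term to its aggregate."""
    rng = np.random.default_rng(seed)
    edge_models = []
    for updates, trusted in zip(device_updates_per_edge, edge_trusted):
        clipped = [clip_update(u, clip_norm) for u in updates]
        if trusted:
            # Trusted subnetwork: one noise draw at the edge aggregate.
            agg = np.mean(clipped, axis=0)
            agg += gaussian_noise(agg.shape, sigma_edge, clip_norm, rng) / len(clipped)
        else:
            # Untrusted edge: each device adds noise before uploading.
            noisy = [c + gaussian_noise(c.shape, sigma_device, clip_norm, rng)
                     for c in clipped]
            agg = np.mean(noisy, axis=0)
        edge_models.append(agg)
    return np.mean(edge_models, axis=0)  # cloud-level aggregate

# Example: two edge subnetworks, only the second trusts its edge server.
updates = [[np.ones(4), np.zeros(4)], [np.full(4, 0.5)]]
model = aggregate_hierarchy(updates, sigma_device=1.2, sigma_edge=0.4,
                            edge_trusted=[False, True])
```

The sketch mirrors the abstract's premise: by matching the injection point to the trust model of each subnetwork, trusted subnetworks can pay a smaller utility cost for a comparable privacy target than uniform device-level perturbation would impose.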