While federated learning (FL) eliminates the transmission of raw data over a network, it remains vulnerable to privacy breaches through the communicated model parameters. In this work, we propose \underline{H}ierarchical \underline{F}ederated Learning with \underline{H}ierarchical \underline{D}ifferential \underline{P}rivacy ({\tt H$^2$FDP}), a DP-enhanced FL methodology for jointly optimizing privacy and performance in hierarchical networks. Building upon recent proposals for Hierarchical Differential Privacy (HDP), a key idea of {\tt H$^2$FDP} is to adapt DP noise injection at the different layers of an established FL hierarchy -- edge devices, edge servers, and cloud servers -- according to the trust models within particular subnetworks. We conduct a comprehensive analysis of the convergence behavior of {\tt H$^2$FDP}, revealing conditions on parameter tuning under which the training process converges sublinearly to a finite stationarity gap that depends on the network hierarchy, trust model, and target privacy level. Leveraging these relationships, we develop an adaptive control algorithm for {\tt H$^2$FDP} that tunes properties of local model training to minimize communication energy, latency, and the stationarity gap while striving to maintain a sublinear convergence rate and meet desired privacy criteria. Subsequent numerical evaluations demonstrate that {\tt H$^2$FDP} achieves substantial improvements in these metrics over baselines under different privacy budgets, and validate the impact of different system configurations.
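To make the convergence claim concrete, the display below sketches the generic form such a guarantee typically takes in non-convex FL analyses; the symbols ($T$ for the number of global rounds, $F$ for the global objective, $\mathbf{w}_t$ for the iterates, and the gap function $\kappa(\cdot)$) are illustrative assumptions here, not the paper's stated bound:
% Illustrative sketch only: the exact constants, exponents, and the form of
% $\kappa$ depend on the actual analysis of {\tt H$^2$FDP}.
\[
  \min_{t < T} \, \mathbb{E}\big[ \| \nabla F(\mathbf{w}_t) \|^2 \big]
  \;\le\;
  \underbrace{\mathcal{O}\big( 1/\sqrt{T} \big)}_{\text{sublinear decay}}
  \;+\;
  \underbrace{\kappa\big( \sigma_{\mathrm{DP}},\, \text{hierarchy},\, \text{trust model} \big)}_{\text{finite stationarity gap}} .
\]
Under this reading, $\kappa$ grows with the injected DP noise variance $\sigma_{\mathrm{DP}}^2$, so tighter privacy budgets enlarge the residual gap, consistent with the stated dependence on the network hierarchy, trust model, and target privacy level.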