While federated learning (FL) eliminates the transmission of raw data over a network, it remains vulnerable to privacy breaches through the communicated model parameters. In this work, we formalize Differentially Private Hierarchical Federated Learning (DP-HFL), a DP-enhanced FL methodology that seeks to improve the privacy-utility tradeoff inherent in FL. Building upon recent proposals for Hierarchical Differential Privacy (HDP), DP-HFL adapts DP noise injection at different layers of an established FL hierarchy -- edge devices, edge servers, and cloud servers -- according to the trust models within particular subnetworks. We conduct a comprehensive analysis of the convergence behavior of DP-HFL, revealing conditions on parameter tuning under which the model training process converges sublinearly to a stationarity gap, with this gap depending on the network hierarchy, trust model, and target privacy level. Subsequent numerical evaluations demonstrate that DP-HFL obtains substantial improvements in convergence speed over baselines for different privacy budgets, and validate the impact of network configuration on training.
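To make the layer-adaptive noise idea concrete, a minimal illustrative calibration (a sketch using the standard Gaussian mechanism with per-layer budgets, not the exact mechanism formalized in this work) assumes each layer $\ell$ releases its aggregate with noise scaled to its own budget $(\epsilon_\ell, \delta_\ell)$:
\[
  \tilde{w}_\ell = w_\ell + \mathcal{N}\!\big(0, \sigma_\ell^2 I\big),
  \qquad
  \sigma_\ell = \frac{\Delta_\ell \sqrt{2 \ln(1.25/\delta_\ell)}}{\epsilon_\ell},
\]
where $\ell$ indexes the hierarchy layer (edge device, edge server, cloud server), $\Delta_\ell$ is the $\ell_2$-sensitivity of the quantity released at layer $\ell$, and the symbols $\epsilon_\ell$, $\delta_\ell$, $\sigma_\ell$, $\Delta_\ell$ are introduced here only for illustration. This is the standard $(\epsilon, \delta)$-DP Gaussian-mechanism calibration for $\epsilon_\ell < 1$; a more trusted subnetwork can be assigned a larger $\epsilon_\ell$ and hence a smaller noise scale $\sigma_\ell$, which is the intuition behind adapting noise injection to per-subnetwork trust models.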