Differentially private federated learning (DP-FL) is a promising technique for collaborative model training while ensuring provable privacy for clients. However, optimizing the tradeoff between privacy and accuracy remains a critical challenge. To the best of our knowledge, we propose the first DP-FL framework (namely UDP-FL) that universally harmonizes any randomization mechanism (e.g., an optimal one) with the Gaussian Moments Accountant (viz. DP-SGD) to significantly boost accuracy and convergence. Specifically, UDP-FL improves model performance by mitigating the reliance on Gaussian noise. The key mediator in this transformation is the Rényi Differential Privacy notion, which is carefully used to harmonize privacy budgets across mechanisms. We also propose a novel method, based on mode connectivity analysis, to theoretically analyze the convergence of DP-FL (including our UDP-FL). Moreover, we evaluate UDP-FL through extensive experiments benchmarked against state-of-the-art (SOTA) methods, demonstrating superiority in both privacy guarantees and model utility. Notably, UDP-FL exhibits substantial resilience against different inference attacks, indicating a significant advance in safeguarding sensitive data in federated learning environments.
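To illustrate how Rényi Differential Privacy (RDP) can serve as a common currency for comparing different randomization mechanisms, the following is a minimal sketch using the standard RDP expressions for the Gaussian and Laplace mechanisms and the standard RDP-to-(ε, δ) conversion. All function names and parameter values here are illustrative assumptions, not the paper's actual implementation.

```python
import math

def gaussian_rdp(alpha, sigma, sensitivity=1.0):
    # RDP of the Gaussian mechanism at order alpha:
    # eps(alpha) = alpha * Delta^2 / (2 * sigma^2)
    return alpha * sensitivity**2 / (2 * sigma**2)

def laplace_rdp(alpha, b):
    # RDP of the Laplace mechanism (scale b, sensitivity 1) at order alpha > 1
    t = (alpha / (2 * alpha - 1)) * math.exp((alpha - 1) / b) \
        + ((alpha - 1) / (2 * alpha - 1)) * math.exp(-alpha / b)
    return math.log(t) / (alpha - 1)

def rdp_to_dp(rdp_fn, delta, orders):
    # Standard conversion: eps = min_alpha [ eps(alpha) + log(1/delta)/(alpha-1) ]
    return min(rdp_fn(a) + math.log(1.0 / delta) / (a - 1) for a in orders)

# Express two heterogeneous mechanisms in the same (eps, delta) terms
orders = [1.5, 2, 4, 8, 16, 32, 64]
delta = 1e-5
eps_gauss = rdp_to_dp(lambda a: gaussian_rdp(a, sigma=2.0), delta, orders)
eps_laplace = rdp_to_dp(lambda a: laplace_rdp(a, b=1.0), delta, orders)
```

Because both mechanisms are reduced to the same RDP-derived (ε, δ) scale, their noise parameters (σ and b) can in principle be tuned to meet a common privacy budget, which is the spirit of the harmonization described above.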