Differentially private (DP) decentralized Federated Learning (FL) allows local users to collaborate without sharing their data with a central server. However, accurately quantifying the privacy budget of private FL algorithms is challenging due to the co-existence of complex algorithmic components such as decentralized communication and local updates. This paper addresses privacy accounting for two decentralized FL algorithms within the $f$-differential privacy ($f$-DP) framework. We develop two new $f$-DP-based accounting methods tailored to decentralized settings: Pairwise Network $f$-DP (PN-$f$-DP), which quantifies privacy leakage between user pairs under random-walk communication, and Secret-based $f$-Local DP (Sec-$f$-LDP), which supports structured noise injection via shared secrets. By combining tools from $f$-DP theory and Markov chain concentration, our accounting framework captures privacy amplification arising from sparse communication, local iterations, and correlated noise. Experiments on synthetic and real datasets demonstrate that our methods yield consistently tighter $(\epsilon,\delta)$ bounds and improved utility compared to R\'enyi DP-based approaches, illustrating the benefits of $f$-DP in decentralized privacy accounting.