Federated learning (FL) enables organizations to collaboratively train models without sharing their datasets. Despite this advantage, recent studies show that both client updates and the global model can leak private information, limiting adoption in sensitive domains such as healthcare. Local differential privacy (LDP) offers strong protection by letting each participant privatize its updates before transmission. However, existing LDP methods were designed for centralized training and face challenges when applied to FL, including high resource demands that can cause client dropouts and a lack of reliable privacy guarantees under asynchronous participation. These issues undermine model generalizability, fairness, and compliance with regulations such as HIPAA and GDPR. To address them, we propose L-RDP, a DP method designed for the LDP setting that maintains constant, lower memory usage to reduce dropouts and provides rigorous per-client privacy guarantees by accounting for intermittent client participation.
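To fix ideas on what "privatizing updates before transmission" means in this setting, the following is a minimal Python sketch of a generic clip-and-noise LDP mechanism applied to a client's model update. It is not the L-RDP method itself (the abstract does not specify its algorithm); `clip_norm` and `noise_multiplier` are hypothetical illustrative parameters.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client's model update and add Gaussian noise before sending.

    A generic local-DP-style mechanism for illustration only; the actual
    L-RDP method and its parameters are not specified here.
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Bound this client's influence by clipping the update's L2 norm.
    norm = np.linalg.norm(update)
    clipped = update if norm == 0 else update * min(1.0, clip_norm / norm)
    # Add noise calibrated to the clipping bound, so the released update
    # carries a per-client differential privacy guarantee.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Usage: a client privatizes the difference between its local weights and
# the received global weights, then transmits only the noisy delta.
global_weights = np.zeros(10)
local_weights = np.ones(10) * 0.1
noisy_delta = privatize_update(local_weights - global_weights)
```

Because the noise is added on the client, the server never observes the raw update, which is the core distinction between local and central DP that motivates the abstract's design goals.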