We introduce a locally differentially private (LDP) algorithm for online federated learning that employs temporally correlated noise to improve utility while preserving privacy. To address challenges posed by the correlated noise and local updates with streaming non-IID data, we develop a perturbed iterate analysis that controls the impact of the noise on the utility. Moreover, we demonstrate how the drift errors from local updates can be effectively managed for several classes of nonconvex loss functions. Subject to an $(\epsilon,\delta)$-LDP budget, we establish a dynamic regret bound that quantifies the impact of key parameters and the intensity of changes in the dynamic environment on the learning performance. Numerical experiments confirm the efficacy of the proposed algorithm.
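To give a concrete sense of how temporally correlated noise can help, the sketch below simulates one simple instance of the idea: anticorrelated Gaussian noise generated by an AR(1) process, added step by step as a private mechanism would add it to local updates. This is an illustrative toy, not the paper's construction; the function name, the AR(1) choice, and all parameter values (`rho`, `sigma`) are assumptions for demonstration. The point it shows is that each noise sample has the same marginal variance as i.i.d. noise, yet negative temporal correlation makes the *accumulated* noise over many iterations much smaller, which is the kind of utility gain a perturbed iterate analysis would quantify.

```python
import numpy as np


def correlated_noise_stream(dim, steps, rho=-0.9, sigma=1.0, seed=None):
    """Yield temporally correlated Gaussian noise via an AR(1) process:

        z_t = rho * z_{t-1} + sqrt(1 - rho^2) * w_t,   w_t ~ N(0, sigma^2 I).

    Every z_t is marginally N(0, sigma^2 I) (same per-step noise level as
    the i.i.d. case), but with rho < 0 consecutive samples partially cancel,
    so running sums of the noise grow far more slowly than with i.i.d. noise.
    Illustrative only: not the mechanism analyzed in the paper.
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(0.0, sigma, dim)
    for _ in range(steps):
        yield z
        z = rho * z + np.sqrt(1.0 - rho**2) * rng.normal(0.0, sigma, dim)


if __name__ == "__main__":
    dim, steps = 1000, 200
    # Accumulated anticorrelated noise vs. accumulated i.i.d. noise.
    s_corr = np.sum(list(correlated_noise_stream(dim, steps, rho=-0.9, seed=0)), axis=0)
    s_iid = np.random.default_rng(0).normal(0.0, 1.0, (steps, dim)).sum(axis=0)
    # The anticorrelated stream's cumulative noise is markedly smaller.
    print(np.linalg.norm(s_corr), np.linalg.norm(s_iid))
```

For an AR(1) stream with correlation `rho`, the variance of the sum of `T` samples scales roughly like `T * (1 + rho) / (1 - rho)` times the i.i.d. variance, so `rho = -0.9` shrinks the accumulated-noise variance by about a factor of 19 while keeping each step's injected noise, and hence the per-step privacy noise level, unchanged.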