Achieving differential privacy (DP) guarantees in fully decentralized machine learning is challenging due to the absence of a central aggregator and the varying trust assumptions among nodes. We present an analytical framework, based on a linear systems formulation, for the DP analysis of decentralized gossip-based averaging algorithms with additive node-level noise, from arbitrary node views in a graph; the framework accurately characterizes privacy leakage between nodes. Our main contribution is showing that the resulting DP guarantees are those of a Gaussian mechanism whose squared sensitivity grows asymptotically as $O(T)$, where $T$ is the number of training rounds, matching the case of central aggregation. As an application of the sensitivity analysis, we show that for strongly convex losses the excess risk of decentralized private learning is asymptotically similar to that of centralized private learning.
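To make the setting concrete, the following is a minimal sketch of gossip-based averaging with additive node-level Gaussian noise on a ring graph. The mixing matrix `W`, noise scale `sigma`, and round count `T` are illustrative choices, not the paper's exact protocol or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, sigma = 8, 200, 0.01  # nodes, gossip rounds, per-round noise scale

# Doubly stochastic gossip matrix for a ring: each node averages
# its own value with those of its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = rng.normal(size=n)  # private local values held by the nodes
target = x.mean()       # the consensus value the protocol approximates

for _ in range(T):
    # Each node adds Gaussian noise locally (the DP mechanism),
    # then the noisy values are mixed by one gossip step.
    x = W @ (x + rng.normal(scale=sigma, size=n))

# Nodes converge to the initial average, perturbed by accumulated noise
# whose variance in the average component grows like sigma^2 * T / n.
print(np.allclose(x, target, atol=0.5))
```

Because each round injects fresh noise, the perturbation of the consensus value accumulates linearly in the number of rounds, which is the intuition behind the $O(T)$ squared-sensitivity growth stated above.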