This paper jointly considers privacy preservation and Byzantine-robustness in decentralized learning. In a decentralized network, honest-but-curious agents faithfully follow the prescribed algorithm but attempt to infer their neighbors' private data from messages received during the learning process, while dishonest-and-Byzantine agents disobey the prescribed algorithm and deliberately disseminate wrong messages to their neighbors so as to bias the learning process. For this novel setting, we investigate a generic privacy-preserving and Byzantine-robust decentralized stochastic gradient descent (SGD) framework, in which Gaussian noise is injected to preserve privacy and robust aggregation rules are adopted to counteract Byzantine attacks. We analyze its learning error and privacy guarantee, discovering an essential tradeoff between privacy preservation and Byzantine-robustness in decentralized learning: the learning error caused by defending against Byzantine attacks is exacerbated by the Gaussian noise added to preserve privacy. For a class of state-of-the-art robust aggregation rules, we give a unified analysis of their "mixing abilities". Building upon this analysis, we reveal how the mixing abilities affect the tradeoff between privacy preservation and Byzantine-robustness. The theoretical results provide guidelines for achieving a favorable tradeoff through proper design of robust aggregation rules. Numerical experiments corroborate our theoretical findings.
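To make the two mechanisms concrete, the following is a minimal sketch of one local round of the kind of framework described above, under assumptions not taken from the paper: Gaussian noise is added to each shared model, and a coordinate-wise trimmed mean stands in for the robust aggregation rule (the paper covers a general class of such rules; the function names and parameters here are illustrative only).

```python
# Hypothetical sketch, NOT the paper's exact algorithm: one round of
# decentralized SGD where an agent (i) adds Gaussian noise to the model
# it shares, for privacy, and (ii) aggregates received messages with a
# coordinate-wise trimmed mean, for Byzantine-robustness.
import numpy as np

rng = np.random.default_rng(0)

def noisy_message(x, sigma):
    # Gaussian perturbation injected before sharing, to preserve privacy.
    return x + rng.normal(0.0, sigma, size=x.shape)

def trimmed_mean(messages, b):
    # Robust aggregation: in each coordinate, drop the b largest and
    # b smallest values, then average the remaining ones.
    M = np.sort(np.stack(messages), axis=0)
    return M[b:len(messages) - b].mean(axis=0)

def local_step(x, grad, neighbor_msgs, lr=0.1, b=1):
    # Robustly mix the (noisy) neighbor models with the local model,
    # then take a stochastic gradient step from the mixed point.
    mixed = trimmed_mean(neighbor_msgs + [x], b)
    return mixed - lr * grad(mixed)

# Toy usage: quadratic loss, four honest neighbors, one Byzantine
# neighbor sending an extreme outlier message.
grad = lambda x: 2 * x                      # gradient of ||x||^2
x = np.ones(3)
msgs = [noisy_message(np.ones(3), 0.01) for _ in range(4)]
msgs.append(np.full(3, 100.0))              # Byzantine outlier
x_new = local_step(x, grad, msgs, b=1)      # outlier trimmed away
```

The sketch also illustrates the tradeoff the paper analyzes: raising `sigma` strengthens the privacy guarantee but widens the spread of honest messages, which makes the trimming in `trimmed_mean` less accurate and enlarges the learning error.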