Federated learning (FL) has rapidly become a compelling paradigm that enables multiple clients to jointly train a model by sharing only gradient updates for aggregation, without revealing their local private data. To protect the gradient updates, which can themselves be privacy-sensitive, a line of work has studied local differential privacy (LDP) mechanisms that provide a formal privacy guarantee. With LDP mechanisms, clients locally perturb their gradient updates before sharing them for aggregation. However, such approaches are known to greatly degrade model utility due to heavy noise addition. To enable a better privacy-utility tradeoff, a recently emerging trend is to apply the shuffle model of DP in FL, which relies on an intermediate shuffling operation over the perturbed gradient updates to achieve privacy amplification. Following this trend, in this paper we present Camel, a new communication-efficient and maliciously secure FL framework in the shuffle model of DP. Camel departs from existing works by ambitiously supporting integrity checks for the shuffle computation, achieving security against a malicious adversary. Specifically, Camel builds on the trending cryptographic primitive of secret-shared shuffle, with custom techniques we develop for optimizing system-wide communication efficiency and for lightweight integrity checks that harden the security of server-side computation. In addition, we derive a significantly tighter bound on the privacy loss by analyzing the Rényi differential privacy (RDP) of the overall FL process. Extensive experiments demonstrate that Camel achieves better privacy-utility trade-offs than the state-of-the-art work, with promising performance.
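The two-step pipeline described above (clients locally perturb their gradient updates, then a shuffler permutes the perturbed updates before aggregation so that individual contributions cannot be linked back to clients) can be sketched minimally as follows. This is an illustrative sketch only, not Camel's actual protocol: the clipping bound `CLIP_NORM`, noise multiplier `SIGMA`, and Gaussian-mechanism perturbation are assumed for illustration, and the secret-shared shuffle and integrity checks are abstracted away into a plain permutation.

```python
import numpy as np

# Hypothetical parameters for illustration; not values from the paper.
CLIP_NORM = 1.0  # L2 clipping bound on each client's gradient
SIGMA = 0.8      # noise multiplier for the (assumed) Gaussian mechanism

def ldp_perturb(grad: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Client side: clip the gradient to L2 norm CLIP_NORM, then add
    Gaussian noise locally before the update ever leaves the client."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, CLIP_NORM / max(norm, 1e-12))
    noise = rng.normal(0.0, SIGMA * CLIP_NORM, size=grad.shape)
    return clipped + noise

def shuffle_and_aggregate(updates: list[np.ndarray],
                          rng: np.random.Generator) -> np.ndarray:
    """Shuffler + server side: randomly permute the perturbed updates
    (breaking the client-to-update linkage, which is what yields the
    privacy amplification), then average them into the global update."""
    order = rng.permutation(len(updates))
    shuffled = [updates[i] for i in order]
    return np.mean(shuffled, axis=0)
```

Note that averaging is permutation-invariant, so the shuffle changes nothing about the aggregated model update; its entire purpose is to hide which client produced which perturbed update, which is what the shuffle model's amplification analysis exploits.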