Secure aggregation (SecAgg) is a commonly used privacy-enhancing mechanism in federated learning, affording the server access only to the aggregate of model updates while safeguarding the confidentiality of individual updates. Despite widespread claims regarding SecAgg's privacy-preserving capabilities, a formal analysis of its privacy is lacking, making such claims unjustified. In this paper, we investigate the privacy of SecAgg by treating it as a local differential privacy (LDP) mechanism for each local update. We design a simple attack in which an adversarial server seeks to determine which of two possible update vectors a client submitted in a single training round of federated learning under SecAgg. By conducting privacy auditing, we assess the success probability of this attack and quantify the LDP guarantees provided by SecAgg. Our numerical results reveal that, contrary to prevailing claims, SecAgg offers weak privacy against membership inference attacks, even in a single training round. Indeed, it is difficult to hide a local update by adding other independent local updates when the updates are high-dimensional. Our findings underscore the need for additional privacy-enhancing mechanisms, such as noise injection, in federated learning.
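The following is a minimal sketch of the distinguishing game and auditing bound described above, not the paper's exact experiment. It makes simplifying assumptions that are ours, not the source's: the other clients' updates are modeled as i.i.d. Gaussian vectors (so their sum is the only "noise" hiding the target client's update under SecAgg), and all parameter values are illustrative.

```python
# Hypothetical sketch: audit SecAgg as an LDP mechanism via a distinguishing game.
# Assumption (not from the paper): other clients' updates are i.i.d. Gaussian,
# so the server's view is the target update plus Gaussian "noise".
import numpy as np

rng = np.random.default_rng(0)
d = 10_000        # update dimension (high dimension is what weakens SecAgg)
n_others = 99     # number of honest clients aggregated with the target
sigma = 1.0       # per-coordinate scale of the other clients' updates
trials = 2_000    # number of audit trials per hypothesis

v0 = rng.normal(0.0, 1.0, d)  # candidate update under hypothesis H0
v1 = rng.normal(0.0, 1.0, d)  # candidate update under hypothesis H1

def secagg_view(v):
    """What the server sees under SecAgg: only the aggregate of all updates."""
    others = rng.normal(0.0, sigma, (n_others, d)).sum(axis=0)
    return v + others

def server_guess(agg):
    """Likelihood-ratio test: with equal Gaussian noise under both hypotheses,
    the optimal test picks the candidate with the smaller residual norm."""
    return 0 if np.sum((agg - v0) ** 2) < np.sum((agg - v1) ** 2) else 1

# Run the distinguishing game and record false-positive / false-negative rates.
fpr = sum(server_guess(secagg_view(v0)) == 1 for _ in range(trials)) / trials
fnr = sum(server_guess(secagg_view(v1)) == 0 for _ in range(trials)) / trials

# Standard pure-LDP auditing bound: eps >= log((1 - FNR) / FPR), and symmetrically
# with the roles of FPR and FNR swapped. Clip the rates at 1/trials so the
# logarithm stays finite when no errors are observed.
fpr_c, fnr_c = max(fpr, 1 / trials), max(fnr, 1 / trials)
eps_lb = max(np.log((1 - fnr) / fpr_c), np.log((1 - fpr) / fnr_c))
print(f"FPR={fpr:.3f}  FNR={fnr:.3f}  empirical eps lower bound >= {eps_lb:.2f}")
```

With these illustrative parameters the test separates the two hypotheses by roughly seven standard deviations, so the attack succeeds in essentially every trial and the empirical lower bound on epsilon is limited only by the number of trials. This is consistent with the abstract's observation that a high-dimensional update is hard to hide behind the sum of other independent updates.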