High utility and rigorous data privacy are among the main goals of a federated learning (FL) system, which learns a model from data distributed among clients. The latter has been pursued by using differential privacy in FL (DPFL). Clients' privacy requirements are often heterogeneous, and existing DPFL works either assume uniform privacy requirements across clients or are not applicable when the server is not fully trusted (our setting). Furthermore, clients often have heterogeneous batch and/or dataset sizes, which, as we show, results in extra variation in the DP noise level across clients' model updates. With these sources of heterogeneity, straightforward aggregation strategies, e.g., assigning clients aggregation weights proportional to their privacy parameters, lead to lower utility. We propose Robust-HDP, which efficiently estimates the true noise level in clients' model updates and considerably reduces the noise level in the aggregated model updates. Robust-HDP improves utility and convergence speed, while being robust to clients that may maliciously send falsified privacy parameters to the server. Extensive experimental results on multiple datasets and our theoretical analysis confirm the effectiveness of Robust-HDP. Our code can be found here.