High utility and rigorous data privacy are among the main goals of a federated learning (FL) system, which learns a model from the data distributed among some clients. The latter has been pursued by using differential privacy in FL (DPFL). There is often heterogeneity in clients' privacy requirements, and existing DPFL works either assume uniform privacy requirements for clients or are not applicable when the server is not fully trusted (our setting). Furthermore, there is often heterogeneity in the batch and/or dataset sizes of clients, which, as shown, results in extra variation in the DP noise level across clients' model updates. With these sources of heterogeneity, straightforward aggregation strategies, e.g., assigning clients aggregation weights proportional to their privacy parameters, will lead to lower utility. We propose Robust-HDP, which efficiently estimates the true noise level in clients' model updates and reduces the noise level in the aggregated model updates considerably. Robust-HDP improves utility and convergence speed, while being safe against clients that may maliciously send falsified privacy parameters to the server. Extensive experimental results on multiple datasets and our theoretical analysis confirm the effectiveness of Robust-HDP. Our code can be found here.
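To illustrate the aggregation idea, here is a minimal sketch of noise-aware weighted aggregation, where clients with noisier updates receive smaller weights. It assumes the server already holds per-client noise-variance estimates; the function name and inverse-variance weighting rule are illustrative simplifications, not the paper's actual estimation procedure.

```python
import numpy as np

def aggregate_inverse_noise(updates, noise_vars):
    """Aggregate client model updates with weights inversely
    proportional to each client's estimated DP noise variance.

    updates:    list of np.ndarray, one flattened update per client
    noise_vars: list of positive floats, estimated noise variance
                per client (hypothetical inputs for illustration)
    """
    w = 1.0 / np.asarray(noise_vars, dtype=float)
    w = w / w.sum()  # normalize weights to sum to 1
    return sum(wi * ui for wi, ui in zip(w, updates))
```

Under this rule, a client whose update carries three times the noise variance of another contributes a third of that client's weight, which lowers the noise level of the aggregate compared to privacy-parameter-proportional weighting.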