Federated Learning (FL) allows multiple participating clients to collaboratively train machine learning models while keeping their datasets local, exchanging only gradient or model updates with a coordinating server. Existing FL protocols are vulnerable to attacks that compromise data privacy and/or model robustness. Recently proposed defenses focus on ensuring either privacy or robustness, but not both. In this paper, we focus on simultaneously achieving differential privacy (DP) and Byzantine robustness for cross-silo FL, based on the idea of learning from history. Robustness is achieved via client momentum, which averages each client's updates over time, thereby reducing the variance of honest clients and exposing the small malicious perturbations of Byzantine clients that are undetectable in a single round but accumulate over time. In our initial solution, DP-BREM, DP is achieved by adding noise to the aggregated momentum, and we account for the privacy cost of the momentum, in contrast to conventional DP-SGD, which accounts for the privacy cost of the gradient. Since DP-BREM assumes a trusted server (which can obtain clients' local models or updates), we further develop our final solution, DP-BREM+, which achieves the same DP and robustness properties as DP-BREM without a trusted server by utilizing secure aggregation techniques, in which the DP noise is securely and jointly generated by the clients. Both theoretical analysis and experimental results demonstrate that our proposed protocols achieve a better privacy-utility tradeoff and stronger Byzantine robustness than several baseline methods under different DP budgets and attack settings.
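The core idea described above can be illustrated with a minimal sketch (not the paper's exact algorithm): each client maintains a momentum term that averages its updates over time, and the server clips and averages these momenta before adding Gaussian DP noise. The hyperparameter names (`beta`, `clip_norm`, `sigma`) are illustrative assumptions, not from the paper.

```python
# Minimal sketch of momentum-based aggregation with DP noise on the
# aggregated momentum (the DP-BREM idea), assuming a trusted server.
import numpy as np

def update_momenta(momenta, gradients, beta=0.9):
    """Each client averages its own updates over time via momentum."""
    return [beta * m + (1 - beta) * g for m, g in zip(momenta, gradients)]

def dp_aggregate(momenta, clip_norm=1.0, sigma=0.5, rng=None):
    """Server side: clip each client's momentum, average, add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = [m * min(1.0, clip_norm / (np.linalg.norm(m) + 1e-12))
               for m in momenta]
    avg = np.mean(clipped, axis=0)
    # Noise scale shrinks with the number of clients, since the clipping
    # bounds each client's contribution to the average.
    noise = rng.normal(0.0, sigma * clip_norm / len(momenta), size=avg.shape)
    return avg + noise
```

In DP-BREM+, the noise in `dp_aggregate` would instead be generated jointly by the clients under secure aggregation, so no single party (including the server) sees the unperturbed momenta.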