Secure aggregation protocols protect the privacy of users' data in federated learning by preventing the disclosure of local gradients. However, many existing protocols impose significant communication and computational burdens on participants and cannot efficiently handle the large update vectors typical of machine learning models. To address this, we present e-SeaFL, an efficient verifiable secure aggregation protocol that requires only one communication round during the aggregation phase. e-SeaFL allows the aggregation server to provide participants with a proof of honest aggregation via authenticated homomorphic vector commitments. Our core idea is to use assisting nodes to help the aggregation server, under trust assumptions similar to those that existing works place on the participating users. Our experiments show that users enjoy an order-of-magnitude efficiency improvement over the state of the art (IEEE S&P 2023) for large gradient vectors with thousands of parameters. Our open-source implementation is available at https://github.com/vt-asaplab/e-SeaFL.
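To make the mask-and-cancel idea behind one-round secure aggregation concrete, here is a minimal toy sketch, not the e-SeaFL protocol itself: each user masks its gradient vector with pseudorandom values derived from seeds shared with the assisting nodes, and the server can remove only the *sum* of all masks from the aggregate, never an individual user's mask. All function names are hypothetical, and a seeded `random.Random` stands in for a cryptographic PRG purely for illustration.

```python
import random

PRIME = 2**31 - 1  # toy modulus; a real protocol works over a suitable finite field


def mask_update(update, seeds):
    # A user masks its gradient vector with PRG output derived from each
    # seed it shares with an assisting node (toy PRG: seeded random ints).
    masked = list(update)
    for seed in seeds:
        rng = random.Random(seed)
        for i in range(len(masked)):
            masked[i] = (masked[i] + rng.randrange(PRIME)) % PRIME
    return masked


def aggregate(masked_updates, all_seeds, dim):
    # The server sums the masked updates; the assisting nodes (simulated
    # here by replaying the seeds) contribute the sum of all masks, so the
    # server can unmask the aggregate without seeing any single update.
    total = [0] * dim
    for mu in masked_updates:
        for i in range(dim):
            total[i] = (total[i] + mu[i]) % PRIME
    for seed in all_seeds:
        rng = random.Random(seed)
        for i in range(dim):
            total[i] = (total[i] - rng.randrange(PRIME)) % PRIME
    return total
```

For example, two users masking `[1, 2, 3]` and `[4, 5, 6]` with their own seeds yield an unmasked aggregate of `[5, 7, 9]`, while each individual masked vector reveals nothing on its own. The actual protocol additionally has the server commit to this aggregate with authenticated homomorphic vector commitments so participants can verify honest aggregation.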