Federated learning (FL) shows great promise for large-scale machine learning, but introduces new privacy and security risks. We propose ByITFL, a novel FL scheme that provides resilience against Byzantine users while keeping the users' data private from the federator and from other users. The scheme builds on the existing non-private FLTrust scheme, which tolerates malicious users through trust scores (TS) that attenuate or amplify the users' gradients. The trust scores are based on the ReLU function, which we approximate by a polynomial. The distributed and privacy-preserving computation in ByITFL combines Lagrange coded computing, verifiable secret sharing, and re-randomization steps. ByITFL is the first Byzantine-resilient FL scheme with full information-theoretic privacy.
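The trust-score mechanism described above can be sketched in plaintext (i.e., without the scheme's coded-computing and secret-sharing layers) as follows. This is a minimal illustration, not the paper's protocol: the degree-2 polynomial coefficients below come from the standard minimax approximation of |x| on [-1, 1] (ReLU(x) = (x + |x|)/2), and the helper names `poly_relu`, `trust_scores`, and `aggregate` are hypothetical.

```python
import numpy as np

def poly_relu(x):
    # Degree-2 polynomial approximation of ReLU on [-1, 1], derived from
    # the minimax approximation |x| ~ x^2 + 1/8 (illustrative choice; the
    # paper's polynomial degree and coefficients may differ).
    return 1 / 16 + x / 2 + x ** 2 / 2

def trust_scores(user_grads, server_grad):
    # FLTrust-style trust score: (approximated) ReLU of the cosine
    # similarity between each user's gradient and the federator's
    # reference gradient, so misaligned (Byzantine) updates are attenuated.
    g0 = server_grad / np.linalg.norm(server_grad)
    sims = np.array([g @ g0 / np.linalg.norm(g) for g in user_grads])
    return poly_relu(sims)

def aggregate(user_grads, server_grad):
    # Rescale each user's gradient to the reference norm, then take the
    # trust-score-weighted average.
    ts = trust_scores(user_grads, server_grad)
    norm0 = np.linalg.norm(server_grad)
    scaled = [norm0 * g / np.linalg.norm(g) for g in user_grads]
    return sum(t * g for t, g in zip(ts, scaled)) / ts.sum()
```

In ByITFL this computation is carried out distributively on secret-shared data, which is why the ReLU must be replaced by a polynomial: polynomials are the operations that Lagrange coded computing evaluates natively.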