Federated Learning (FL) has emerged as a machine learning approach that preserves the privacy of users' data. In FL, clients train machine learning models on their local datasets and a central server aggregates the parameters learned by the clients, producing a global model without users' data ever being shared. However, the state of the art includes several attacks on FL systems. For instance, gradient inversion (gradient leakage) attacks can reconstruct, with high precision, the local data used during the FL training phase. This paper presents Deep Leakage from Gradients with Feedback Blending (DLG-FB), an approach that improves the gradient inversion attack by exploiting the spatial correlation that typically exists in batches of images. The evaluation shows improvements of 19.18% in attack success rate and 48.82% in the number of iterations per attacked image.
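The gradient inversion attack that DLG-FB builds on can be sketched as follows. This is a minimal illustrative example of DLG-style gradient matching under assumed conditions (a tiny linear classifier, one random private sample, soft dummy labels, LBFGS optimization); it is not the paper's DLG-FB method with feedback blending.

```python
# Minimal sketch of a DLG-style gradient inversion attack.
# Assumptions (not from the paper): a tiny linear classifier, one random
# private sample, and plain gradient matching without feedback blending.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Victim side: compute the gradient a client would share for one sample.
model = nn.Linear(8, 2)
x_true = torch.randn(1, 8)
y_true = torch.tensor([1])
loss = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker side: optimize dummy data and labels so that the gradients they
# induce match the leaked ones.
x_dummy = torch.randn(1, 8, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)  # soft (continuous) label
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    dummy_loss = torch.sum(
        -F.softmax(y_dummy, dim=-1) * F.log_softmax(model(x_dummy), dim=-1)
    )
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True
    )
    # Squared distance between the attacker's gradients and the leaked ones.
    diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    diff.backward()
    return diff

initial = closure().item()
final = initial
for _ in range(30):
    final = opt.step(closure).item()
```

As the gradient-matching loss `final` is driven toward zero, the dummy sample converges toward (a reconstruction of) the private training input; DLG-FB's contribution is to accelerate this convergence for image batches by blending already-reconstructed images into the initialization of subsequent ones.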