In this paper, we propose a novel method for enhancing security in privacy-preserving federated learning with the Vision Transformer. In federated learning, a model is trained by collecting update information from each client without collecting its raw data. However, the raw data may be inferred from this update information. Conventional countermeasures against such data inference (security-enhancement methods) involve a trade-off between the strength of privacy protection and learning efficiency, and they generally degrade model performance. In this paper, we propose a novel federated learning method that does not degrade model performance and is robust against data-inference attacks on update information. In the proposed method, each client independently prepares a sequence of binary (0 or 1) random numbers, multiplies it by the update information, and sends the result to a server for model training. In experiments, the effectiveness of the proposed method is confirmed in terms of model performance and resistance to the APRIL (Attention PRIvacy Leakage) restoration attack.
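The client-side masking step described above can be sketched as follows. This is a minimal illustration only, assuming the update is a flat vector of floats; `mask_update` is a hypothetical helper, not the paper's implementation, and the handling of the mask on the server side is outside this sketch:

```python
import random

def mask_update(update, seed=None):
    # Hypothetical helper: each client independently samples a binary
    # (0 or 1) random sequence of the same length as its update vector,
    # then multiplies the two element-wise before sending to the server.
    rng = random.Random(seed)
    mask = [rng.randint(0, 1) for _ in update]
    masked = [u * m for u, m in zip(update, mask)]
    return masked, mask

# Example: mask one client's (toy) update vector.
update = [0.5, -1.2, 0.3, 0.9]
masked, mask = mask_update(update, seed=42)
# Every masked entry is either the original value or zero.
assert all(v == u * m for v, u, m in zip(masked, update, mask))
```

Because each client draws its mask independently and keeps it private, an attacker observing only the masked update cannot directly recover the original values at the zeroed positions.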