Spatiotemporal federated learning has recently attracted intensive study for its ability to train valuable models from shared gradients alone in various location-based services. At the same time, recent work has shown that shared gradients may be vulnerable to gradient inversion attacks (GIA), which have been demonstrated on image and text data. So far, however, there has been no systematic study of gradient inversion attacks in spatiotemporal federated learning. In this paper, we explore the gradient inversion problem in spatiotemporal federated learning from both the attack and defense perspectives. To understand the privacy risks, we first propose the Spatiotemporal Gradient Inversion Attack (ST-GIA), a gradient attack algorithm tailored to spatiotemporal data that successfully reconstructs original locations from shared gradients. We further design an adaptive defense strategy to mitigate gradient inversion attacks in spatiotemporal federated learning. By dynamically adjusting the perturbation level, it offers tailored protection across training rounds and thereby achieves a better privacy-utility trade-off than current state-of-the-art methods. Extensive experiments on three real-world datasets show that the proposed defense preserves the utility of spatiotemporal federated learning while providing effective protection against the attack.
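To make the threat concrete, the following is a minimal sketch (not the paper's ST-GIA algorithm) of why shared gradients leak inputs: for a single linear layer, the weight gradient is an outer product of the loss derivative and the input, so an honest-but-curious server can recover the input exactly by dividing the weight gradient by the bias gradient. The (lat, lon) values below are hypothetical.

```python
import numpy as np

# Toy single-layer model: logits = W @ x + b with squared-error loss.
# Classic leakage: grad_W = outer(delta, x) and grad_b = delta, so any row i
# with grad_b[i] != 0 yields x = grad_W[i] / grad_b[i].
rng = np.random.default_rng(0)
x = np.array([0.3521, 0.7841])      # hypothetical normalized (lat, lon) location
y = np.array([1.0, 0.0, 0.0])       # one-hot target
W = rng.normal(size=(3, 2))
b = rng.normal(size=3)

logits = W @ x + b
delta = 2 * (logits - y)            # d(loss)/d(logits) for squared error
grad_W = np.outer(delta, x)         # gradients the client would share
grad_b = delta

# Attack: reconstruct the location from the shared gradients alone.
i = np.argmax(np.abs(grad_b))       # pick a row with non-negligible bias gradient
x_rec = grad_W[i] / grad_b[i]       # exact recovery for this linear layer
```

For deeper models no such closed form exists, which is why gradient inversion attacks instead optimize a dummy input until its gradient matches the shared one; this sketch only illustrates the underlying leakage.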
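On the defense side, the idea of dynamically adjusting perturbation can be sketched as gradient clipping plus Gaussian noise whose scale varies with the training round. The schedule and parameter names below are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def perturb(grad, round_idx, total_rounds, clip=1.0, sigma0=0.5, rng=None):
    """Clip the gradient to norm `clip`, then add Gaussian noise whose scale
    decays linearly over rounds (an assumed schedule): early rounds, where
    gradients reveal more about raw locations, get stronger perturbation."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)                  # bound per-update sensitivity
    sigma = sigma0 * (1 - round_idx / total_rounds)  # round-adaptive noise level
    return grad + rng.normal(scale=sigma * clip, size=grad.shape)

g = np.ones(4)
early = perturb(g, round_idx=0, total_rounds=10)     # heavily perturbed
late = perturb(g, round_idx=9, total_rounds=10)      # lightly perturbed
```

Shrinking the noise as training converges is what trades privacy against utility: later rounds contribute smaller, less revealing updates, so they can tolerate less distortion.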