Gradient leakage attacks pose a significant threat to the privacy guarantees of federated learning. While distortion-based protection mechanisms are commonly employed to mitigate this issue, they often lead to notable performance degradation. Existing methods struggle to preserve model performance while ensuring privacy. To address this challenge, we propose a novel data augmentation-based framework designed to achieve a favorable privacy-utility trade-off, with the potential to enhance model performance in certain cases. Our framework incorporates the AugMix algorithm at the client level, enabling data augmentation with controllable severity. By integrating the Jensen-Shannon (JS) divergence into the loss function, we embed the distortion introduced by AugMix into the model gradients, effectively safeguarding privacy against deep leakage attacks. Moreover, the JS divergence promotes model consistency across different augmentations of the same image, enhancing both robustness and performance. Extensive experiments on benchmark datasets demonstrate the effectiveness and stability of our method in protecting privacy. Furthermore, our approach maintains, and in some cases improves, model performance, showcasing its ability to achieve a robust privacy-utility trade-off.
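The JS-divergence consistency term described above can be sketched as follows. This is a minimal NumPy illustration of a Jensen-Shannon consistency loss over predictions on a clean image and two augmented views (as in AugMix-style training), not the authors' implementation; all function names here are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def js_consistency(logits_clean, logits_aug1, logits_aug2, eps=1e-12):
    """Jensen-Shannon divergence among the model's predictions on the
    clean input and two augmented views. Adding this term to the task
    loss encourages consistent predictions across augmentations, which
    also embeds the augmentation-induced distortion into the gradients."""
    probs = [softmax(l) for l in (logits_clean, logits_aug1, logits_aug2)]
    mixture = sum(probs) / 3.0  # M = (p_clean + p_aug1 + p_aug2) / 3
    def kl(p, q):
        # KL(p || q), with eps to avoid log(0).
        return (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)
    # JS = mean of KL(p_i || M) over the three views, averaged over the batch.
    return float(np.mean(sum(kl(p, mixture) for p in probs) / 3.0))
```

Identical predictions across the three views give a divergence of zero, and the term grows (bounded by ln 3) as the views disagree, so its gradient pulls the model toward augmentation-invariant outputs.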