Gradient inversion attacks are often presented as a serious privacy threat in federated learning, with recent work reporting increasingly strong reconstructions under favorable experimental settings. However, it remains unclear whether such attacks are feasible in the modern, performance-optimized systems deployed in practice. In this work, we evaluate the practical feasibility of gradient inversion for image-based federated learning. We conduct a systematic study across multiple datasets and tasks, including image classification and object detection, using canonical vision architectures at contemporary resolutions. Our results show that while gradient inversion remains possible for certain legacy or transitional designs under highly restrictive assumptions, modern, performance-optimized models consistently resist visually meaningful reconstruction. We further demonstrate that many reported successes rely on upper-bound settings, such as inference-mode operation or architectural simplifications, that do not reflect realistic training pipelines. Taken together, our findings indicate that, under an honest-but-curious server assumption, high-fidelity image reconstruction via gradient inversion does not constitute a critical privacy risk in production-optimized federated learning systems, and that practical risk assessments must carefully distinguish diagnostic attack settings from real-world deployments.
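For context, gradient inversion is conventionally posed as a gradient-matching optimization over a dummy input; a minimal sketch of the standard objective (notation ours, not drawn from this paper) is:

\[
\hat{x} \;=\; \arg\min_{x'} \,\big\| \nabla_\theta \mathcal{L}\big(f_\theta(x'), y\big) - g \big\|_2^2 \;+\; \alpha\, \mathcal{R}(x'),
\]

where \(g\) is the client gradient observed by the server, \(f_\theta\) the shared model, \(\mathcal{L}\) the training loss, \(\mathcal{R}\) an image prior such as total variation, and \(\alpha\) a weighting term; in many reported attacks the label \(y\) is recovered analytically from the final-layer gradient before this optimization begins.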