Federated learning enables collaborative model training across distributed clients while preserving data privacy. However, in practical deployments, device heterogeneity and non-independent and identically distributed (non-IID) data often lead to highly unstable and biased gradient updates. When differential privacy is enforced, conventional fixed gradient clipping and Gaussian noise injection may further amplify gradient perturbations, resulting in training oscillation and degraded model performance. To address these challenges, we propose an adaptive differentially private federated learning framework that explicitly targets model efficiency under heterogeneous and privacy-constrained settings. On the client side, a lightweight local compression module regularizes intermediate representations and constrains gradient variability, thereby mitigating noise amplification during local optimization. On the server side, an adaptive gradient clipping strategy dynamically adjusts clipping thresholds based on historical update statistics to avoid over-clipping and noise domination. Furthermore, a constraint-aware aggregation mechanism suppresses unreliable or noise-dominated client updates and stabilizes global optimization. Extensive experiments on CIFAR-10 and SVHN demonstrate improved convergence stability and classification accuracy.
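The server-side mechanism described above (adapting the clipping threshold from historical update statistics, then clipping and noising client updates) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the specific rule used here, setting the threshold to a running quantile of past per-client update norms, is an assumption, as are the function name and parameters.

```python
import numpy as np

def adaptive_clip_and_noise(updates, norm_history, noise_multiplier=1.0, quantile=0.5):
    """Hypothetical sketch of adaptive DP aggregation on the server.

    updates:      list of flattened per-client update vectors (np.ndarray)
    norm_history: running list of observed update norms (mutated in place)
    Returns the noisy averaged update and the threshold used.
    """
    norms = [float(np.linalg.norm(u)) for u in updates]
    norm_history.extend(norms)

    # Assumed adaptation rule: threshold = quantile of historical norms,
    # so the clip level tracks the actual scale of client updates
    # instead of a fixed constant that may over-clip.
    clip_threshold = float(np.quantile(norm_history, quantile))

    # Standard DP-style clipping: rescale any update whose norm exceeds
    # the threshold so its norm equals the threshold.
    clipped = [u * min(1.0, clip_threshold / (n + 1e-12))
               for u, n in zip(updates, norms)]

    # Gaussian noise calibrated to the clipping threshold.
    noisy = [u + np.random.normal(0.0, noise_multiplier * clip_threshold, size=u.shape)
             for u in clipped]
    return np.mean(noisy, axis=0), clip_threshold
```

In this sketch the threshold rises or falls with the observed norms, so a round of unusually large (or small) heterogeneous updates shifts the clip level rather than being uniformly truncated; the noise scale follows the threshold, which is the interaction the abstract flags as "noise domination" under fixed clipping.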