Federated Learning (FL) has emerged as a promising approach for privacy-preserving model training across decentralized devices. However, it faces challenges such as statistical heterogeneity and susceptibility to adversarial attacks, which can degrade model robustness and fairness. Personalized FL offers partial relief by customizing models for individual clients, but it does not address server-side aggregation vulnerabilities. We introduce a novel method called \textbf{FedAA}, which optimizes client contributions via \textbf{A}daptive \textbf{A}ggregation to enhance model robustness against malicious clients and to ensure fairness across participants in non-identically distributed settings. To this end, FedAA combines a Deep Deterministic Policy Gradient-based algorithm for continuous control of aggregation weights, a client selection method based on model parameter distances, and a reward mechanism guided by validation set performance. Extensive experiments demonstrate that \textbf{FedAA} outperforms state-of-the-art methods in robustness while maintaining comparable fairness, offering a promising solution for building resilient and fair federated systems. Our code is available at https://github.com/Gp1g/FedAA.
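The interplay of distance-based client selection and weighted aggregation described above can be sketched as follows. This is a minimal illustrative heuristic, not the paper's method: it substitutes a fixed inverse-distance softmax for the DDPG-learned weights, and the function name, `keep_ratio`, and `temperature` parameters are hypothetical.

```python
import numpy as np

def aggregate_with_distance_based_selection(client_params, keep_ratio=0.8, temperature=1.0):
    """Illustrative sketch only (not FedAA itself): select clients whose
    flattened parameters lie closest to the coordinate-wise median, then
    average them with softmax weights that shrink with distance, so outlier
    (potentially malicious) updates are down-weighted or excluded."""
    P = np.stack(client_params)             # (num_clients, num_params)
    center = np.median(P, axis=0)           # robust reference point
    dists = np.linalg.norm(P - center, axis=1)
    k = max(1, int(keep_ratio * len(P)))    # number of clients to keep
    keep = np.argsort(dists)[:k]            # clients closest to the median
    w = np.exp(-dists[keep] / temperature)  # closer clients get larger weights
    w /= w.sum()
    return (w[:, None] * P[keep]).sum(axis=0)
```

In FedAA these weights are instead produced by a DDPG actor and refined via a validation-set reward; the sketch only conveys why parameter distances are a useful selection signal.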