Federated Adversarial Learning (FAL) is a robust framework for defending federated learning against adversarial attacks. Although several FAL studies have developed efficient algorithms, they focus primarily on convergence and overlook generalization, which is crucial for evaluating algorithm performance on unseen data. Generalization analysis is particularly challenging, however, because adversarial loss functions are non-smooth; a common way to address this is through smoothness approximation. In this paper, we develop algorithmic stability measures to evaluate the generalization performance of two popular FAL algorithms, \textit{Vanilla FAL (VFAL)} and \textit{Slack FAL (SFAL)}, under three smoothness approximation methods: (1) \textit{Surrogate Smoothness Approximation (SSA)}, (2) \textit{Randomized Smoothness Approximation (RSA)}, and (3) \textit{Over-Parameterized Smoothness Approximation (OPSA)}. Based on our in-depth analysis, we answer the question of how to properly choose a smoothness approximation method to mitigate generalization error in FAL, and we identify RSA as the most effective method for reducing generalization error. In highly data-heterogeneous scenarios, we further recommend employing SFAL to mitigate the deterioration of generalization performance caused by heterogeneity. Building on our theoretical results, we provide insights toward developing more efficient FAL algorithms, such as designing new metrics and dynamic aggregation rules to mitigate heterogeneity.