Federated AUC maximization is a powerful approach for learning from imbalanced data in federated learning (FL). However, existing methods typically assume full client availability, which is rarely practical. In real-world FL systems, clients often participate in a cyclic manner, joining training according to a fixed, repeating schedule. This setting poses unique optimization challenges for the non-decomposable AUC objective. This paper addresses these challenges by developing and analyzing communication-efficient algorithms for federated AUC maximization under cyclic client participation. We investigate two key settings. First, we study AUC maximization with a squared surrogate loss, which reformulates the problem as a nonconvex-strongly-concave minimax optimization problem. By leveraging the Polyak-Łojasiewicz (PL) condition, we establish a state-of-the-art communication complexity of $\widetilde{O}(1/\epsilon^{1/2})$ and an iteration complexity of $\widetilde{O}(1/\epsilon)$. Second, we consider general pairwise AUC losses, for which we establish a communication complexity of $O(1/\epsilon^3)$ and an iteration complexity of $O(1/\epsilon^4)$; under the PL condition, these bounds improve to $\widetilde{O}(1/\epsilon^{1/2})$ and $\widetilde{O}(1/\epsilon)$, respectively. Extensive experiments on benchmark tasks in image classification, medical imaging, and fraud detection demonstrate the superior efficiency and effectiveness of our proposed methods.
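For context, the squared-loss reformulation mentioned above is typically obtained via the well-known min-max form from the stochastic AUC maximization literature (e.g., Ying et al., 2016); the display below restates that standard formulation as an illustrative sketch and is not necessarily the exact objective or notation used in this paper. Here $h_{\mathbf{w}}$ denotes the model's scoring function, $p = \Pr(y = 1)$ the positive-class rate, $\mathbb{I}[\cdot]$ the indicator function, and $a, b, \alpha$ auxiliary scalar variables:
\[
\min_{\mathbf{w},\, a,\, b}\; \max_{\alpha \in \mathbb{R}}\;
\mathbb{E}_{(x, y)}\Big[
(1-p)\,\big(h_{\mathbf{w}}(x) - a\big)^2\,\mathbb{I}[y = 1]
+ p\,\big(h_{\mathbf{w}}(x) - b\big)^2\,\mathbb{I}[y = -1]
+ 2(1+\alpha)\big(p\, h_{\mathbf{w}}(x)\,\mathbb{I}[y = -1] - (1-p)\, h_{\mathbf{w}}(x)\,\mathbb{I}[y = 1]\big)
- p(1-p)\,\alpha^2
\Big].
\]
In this standard form the objective is strongly concave in $\alpha$ (quadratic with coefficient $-p(1-p) < 0$) and generally nonconvex in $\mathbf{w}$ when $h_{\mathbf{w}}$ is a deep network, which is the nonconvex-strongly-concave minimax structure referred to in the abstract.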