This paper addresses distributed stochastic minimax optimization subject to stochastic constraints. We propose a novel first-order Softmax-Weighted Switching Gradient method tailored to federated learning. Under full client participation, our algorithm achieves the standard $\mathcal{O}(\varepsilon^{-4})$ oracle complexity to meet a unified tolerance $\varepsilon$ on both the optimality gap and the constraint violation. We extend the analysis to the practical partial-participation regime by quantifying client sampling noise through a stochastic superiority assumption. Furthermore, by relaxing standard boundedness assumptions on the objective functions, we establish a strictly tighter lower bound on the softmax hyperparameter. We provide a unified error decomposition and establish a sharp $\mathcal{O}(\log\frac{1}{\delta})$ high-probability convergence guarantee. Overall, our framework demonstrates that a single-loop, primal-only switching mechanism offers a stable alternative for optimizing worst-case client performance, bypassing the hyperparameter sensitivity and convergence oscillations often encountered in primal-dual and penalty-based approaches. We verify the efficacy of our algorithm through experiments on Neyman-Pearson (NP) classification and fair classification tasks.
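For concreteness, the sketch below illustrates one plausible step of a softmax-weighted switching update, assuming the standard structure of switching gradient methods: descend along the softmax-weighted aggregate of per-client objective gradients while a stochastic constraint estimate is within tolerance, and along the constraint gradient otherwise. The names (`swsg_step`, `tau`, `eta`, `tol`) and the exact update rule are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def softmax_weights(losses, tau):
    """Softmax weights over per-client losses; tau > 0 controls sharpness.

    As tau grows, the weighted sum approaches the worst-case (max)
    client loss; small tau recovers near-uniform averaging.
    """
    z = tau * (losses - losses.max())  # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

def swsg_step(x, client_grads, client_losses,
              constraint_val, constraint_grad, eta, tau, tol):
    """One hypothetical switching-gradient step (not the paper's exact update).

    x               : current model parameters, shape (d,)
    client_grads    : stochastic objective gradients per client, shape (m, d)
    client_losses   : stochastic loss estimates per client, shape (m,)
    constraint_val  : stochastic estimate of the constraint function at x
    constraint_grad : stochastic gradient of the constraint at x, shape (d,)
    """
    if constraint_val <= tol:
        # Approximately feasible: descend on the softmax-weighted objective,
        # a smooth surrogate for the worst-case client loss.
        w = softmax_weights(client_losses, tau)
        direction = (w[:, None] * client_grads).sum(axis=0)
    else:
        # Infeasible: switch to the constraint gradient to restore feasibility.
        direction = constraint_grad
    return x - eta * direction
```

The softmax weighting yields a smooth surrogate for the max over client losses, which is what allows a single-loop, primal-only update without maintaining dual variables.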