We introduce FedSGM, a unified framework for federated constrained optimization that addresses four major challenges in federated learning (FL): functional constraints, communication bottlenecks, local updates, and partial client participation. Building on the switching gradient method, FedSGM performs projection-free, primal-only updates, avoiding expensive dual-variable tuning and inner solvers. To handle communication limits, FedSGM incorporates bi-directional error feedback, correcting the bias introduced by compression while explicitly characterizing the interaction between compression noise and multi-step local updates. We derive convergence guarantees showing that the averaged iterate achieves the canonical $\mathcal{O}(1/\sqrt{T})$ rate, together with high-probability bounds that decouple optimization progress from the sampling noise induced by partial participation. Additionally, we introduce a soft-switching variant of FedSGM that stabilizes updates near the feasibility boundary. To our knowledge, FedSGM is the first framework to unify functional constraints, compression, multiple local updates, and partial client participation, establishing a theoretically grounded foundation for constrained federated learning. Finally, we validate the theoretical guarantees of FedSGM through experiments on Neyman-Pearson classification and constrained Markov decision process (CMDP) tasks.
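For intuition, here is a minimal sketch of the classical (centralized) switching gradient rule that FedSGM builds on, with objective $f$, a single functional constraint $g(x) \le 0$, step size $\eta$, and switching tolerance $\tau_t$; these are generic symbols for illustration, not necessarily the paper's notation or its federated update:
$$
x_{t+1} \;=\;
\begin{cases}
x_t - \eta\, \nabla f(x_t), & \text{if } g(x_t) \le \tau_t \quad \text{(objective step)},\\[2pt]
x_t - \eta\, \nabla g(x_t), & \text{otherwise} \quad \text{(feasibility step)}.
\end{cases}
$$
Each iteration takes a plain gradient step on either the objective or the violated constraint, which is what makes the template projection-free and primal-only; FedSGM extends this template to the federated setting with compression, error feedback, multiple local updates, and partial participation.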