Federated Learning (FL) enables collaborative training across decentralized data, but it faces two key challenges: bidirectional communication overhead and client-side data heterogeneity. In personalized FL, the goal shifts from training a single global model to creating a tailored model for each client. To address communication costs while embracing data heterogeneity, we propose pFed1BS, a novel personalized FL framework that achieves extreme communication compression through one-bit random sketching. In our framework, each client transmits a highly compressed one-bit sketch, and the server aggregates these sketches and broadcasts a global one-bit consensus. To enable effective personalization, we introduce a sign-based regularizer that guides local models to align with the global consensus while preserving local data characteristics. To mitigate the computational burden of random sketching, we employ the Fast Hadamard Transform for efficient projection. Our theoretical analysis guarantees that the algorithm converges to a neighborhood of a stationary point of the global potential function. Numerical simulations demonstrate that pFed1BS substantially reduces communication costs while achieving performance competitive with state-of-the-art communication-efficient FL algorithms.
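To make the mechanism concrete, the following is a minimal Python sketch of the pipeline described above, under stated assumptions: the sketching operator is taken to be a subsampled randomized Hadamard transform (a random sign diagonal followed by the Fast Hadamard Transform and row subsampling), and the sign-based regularizer is rendered as a hinge-style surrogate, since the exact form is not given here. The names fwht, one_bit_sketch, and sign_regularizer are illustrative, not the paper's reference implementation.

```python
# Hedged sketch of pFed1BS-style one-bit sketching; operator choice and
# surrogate penalty are assumptions, not the authors' reference code.
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard Transform; len(x) must be a power of 2."""
    x = x.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(len(x))  # orthonormal scaling

def one_bit_sketch(w, signs, rows):
    """Client-side compression: one-bit sign of a random projection of w.
    signs: shared +/-1 random diagonal (derived from a common seed)
    rows:  indices of m sampled Hadamard rows, with m << d
    """
    return np.sign(fwht(signs * w)[rows])  # m bits instead of 32*d floats

def sign_regularizer(w, consensus, signs, rows, lam=0.1):
    """Hinge-style surrogate (an assumption) penalizing local sketch
    coordinates whose sign disagrees with the global one-bit consensus."""
    proj = fwht(signs * w)[rows]
    return lam * np.mean(np.maximum(0.0, -consensus * proj))

# Toy round: d = 8 parameters, m = 4 one-bit measurements, 3 clients.
rng = np.random.default_rng(0)
d, m, n_clients = 8, 4, 3
signs = rng.choice([-1.0, 1.0], size=d)      # shared random diagonal
rows = rng.choice(d, size=m, replace=False)  # shared row subsample

clients = [rng.normal(size=d) for _ in range(n_clients)]
sketches = [one_bit_sketch(w, signs, rows) for w in clients]

# Server: majority vote over the one-bit sketches gives the consensus.
consensus = np.sign(np.sum(sketches, axis=0))
print("consensus bits:", consensus)
print("client 0 penalty:", sign_regularizer(clients[0], consensus, signs, rows))
```

In this toy round, each client uploads m bits rather than d full-precision coordinates, and the regularizer is added to the local empirical loss so that personalization proceeds on local data while the one-bit consensus supplies the global coupling.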