Quantization is a common approach to mitigating the communication cost of federated learning (FL). In practice, the quantized local parameters are further encoded via an entropy coding technique, such as Huffman coding, for efficient data compression. In this case, the exact communication overhead is determined by the bit rate of the encoded gradients. Recognizing this fact, this work departs from existing approaches in the literature and develops a novel quantized FL framework, called \textbf{r}ate-\textbf{c}onstrained \textbf{fed}erated learning (RC-FED), in which the gradients are quantized subject to both fidelity and data rate constraints. We formulate this scheme as a joint optimization problem in which the quantization distortion is minimized while the rate of the encoded gradients is kept below a target threshold. This enables a tunable trade-off between quantization distortion and communication cost. We analyze the convergence behavior of RC-FED and demonstrate its superior performance over baseline quantized FL schemes on several datasets.
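As a minimal sketch of this joint formulation (the notation here is illustrative rather than taken from the paper body: $\mathcal{Q}$ denotes the quantization scheme, $D(\mathcal{Q})$ the quantization distortion, $R(\mathcal{Q})$ the bit rate of the entropy-coded gradients, and $R_{\mathrm{target}}$ the target rate threshold):
\begin{equation*}
  \min_{\mathcal{Q}} \; D(\mathcal{Q})
  \quad \text{subject to} \quad
  R(\mathcal{Q}) \le R_{\mathrm{target}}.
\end{equation*}
Varying $R_{\mathrm{target}}$ is what yields the tunable trade-off between quantization distortion and communication cost described above.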