In federated learning, a server must periodically broadcast a model to the agents. We propose to use multi-resolution coding and modulation (also known as non-uniform modulation) for this purpose. In the simplest instance, broadcast transmission is used, whereby all agents are targeted with one and the same transmission (typically without any particular favored beam direction), which is coded using multi-resolution coding/modulation. This enables high-SNR agents, with high path gains to the server, to receive a more accurate model than the low-SNR agents do, without consuming additional downlink resources. As one implementation, we use transmission with a non-uniform 8-PSK constellation, where a high-SNR receiver (agent) can separate all 8 constellation points (hence receive 3 bits), whereas a low-SNR receiver can only separate 4 points (hence receive 2 bits). By encoding the least significant information in the third bit, the high-SNR receivers can obtain the model with higher accuracy, while the low-SNR receivers can still obtain the model, although with reduced accuracy, thereby facilitating at least some basic participation of the low-SNR receivers. We show the effectiveness of our proposed scheme via experimentation using federated learning with the MNIST dataset.
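The hierarchical behavior of such a constellation can be illustrated with a minimal simulation sketch. The parameterization below is an assumption for illustration only (the paper does not specify the exact constellation geometry): four QPSK cluster angles, each split into a pair of points offset by a small angle `delta`, so the first two bits select the (robust) cluster and the third bit selects the (fragile) point within the cluster. A nearest-neighbor detector over all 8 points recovers 3 bits at high SNR, while a detector over the 4 cluster centers recovers the 2 coarse bits even at low SNR.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-uniform 8-PSK: four QPSK cluster angles, each split into
# a pair offset by +/- delta. Symbol index i belongs to cluster i // 2.
delta = 0.15  # radians; small offset -> the third bit needs high SNR
cluster_angles = np.array([np.pi/4, 3*np.pi/4, 5*np.pi/4, 7*np.pi/4])
points = np.exp(1j * (np.repeat(cluster_angles, 2)
                      + np.tile([-delta, +delta], 4)))  # 8 unit-energy symbols
centers = np.exp(1j * cluster_angles)                   # 4 cluster centers

n = 20000
tx = rng.integers(0, 8, n)          # random 3-bit symbols
results = {}
for snr_db in (8.0, 25.0):          # a low-SNR and a high-SNR agent
    noise_std = np.sqrt(0.5 / 10**(snr_db / 10))  # per-dimension std, Es = 1
    rx = points[tx] + noise_std * (rng.standard_normal(n)
                                   + 1j * rng.standard_normal(n))
    # High-SNR strategy: nearest neighbor over all 8 points (3 bits).
    est8 = np.argmin(np.abs(rx[:, None] - points[None, :]), axis=1)
    # Low-SNR strategy: nearest neighbor over the 4 cluster centers (2 bits).
    est4 = np.argmin(np.abs(rx[:, None] - centers[None, :]), axis=1)
    coarse_acc = np.mean(est4 == tx // 2)   # coarse 2-bit symbol accuracy
    fine_acc = np.mean(est8 == tx)          # full 3-bit symbol accuracy
    results[snr_db] = (coarse_acc, fine_acc)
    print(f"SNR {snr_db:4.1f} dB: coarse acc {coarse_acc:.3f}, "
          f"full acc {fine_acc:.3f}")
```

At high SNR both detectors succeed, so all 3 bits get through; at low SNR the coarse 2-bit decision remains reliable while the intra-cluster bit degrades, which is exactly the graceful-degradation property the broadcast scheme relies on.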