Federated Learning (FL) can protect the privacy of vehicles in vehicular edge computing (VEC) to a certain extent by sharing the gradients of vehicles' local models instead of their local data. For vehicular artificial intelligence (AI) applications, the gradients of vehicles' local models are usually large, so transmitting them would incur a large per-round latency. Gradient quantization has been proposed as an effective approach to reduce the per-round latency in FL-enabled VEC: it compresses gradients by reducing the number of bits used to transmit them, i.e., the quantization level. The selection of the quantization level and thresholds determines the quantization error, which in turn affects the model accuracy and training time. Hence, the total training time and quantization error (QE) become two key metrics for FL-enabled VEC, and it is critical to optimize them jointly. However, the time-varying channel condition makes this problem more challenging to solve. In this paper, we propose a distributed deep reinforcement learning (DRL)-based quantization level allocation scheme to optimize the long-term reward in terms of the total training time and QE. Extensive simulations identify the optimal weighting factors between the total training time and QE, and demonstrate the feasibility and effectiveness of the proposed scheme.
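To make the trade-off concrete, the following is a minimal sketch of stochastic uniform gradient quantization in the style of QSGD, a standard scheme of the kind the abstract refers to; the function name and the choice of per-vector norm scaling are illustrative assumptions, not the paper's exact method. A higher `num_levels` (more bits per coordinate) lowers the quantization error but increases the transmitted payload, which is precisely the tension the proposed DRL scheme balances.

```python
import numpy as np

def quantize(grad, num_levels):
    """Stochastically quantize a gradient vector to `num_levels` uniform
    magnitude levels per coordinate (QSGD-style). The randomized rounding
    makes the quantized gradient an unbiased estimate of the original."""
    norm = np.linalg.norm(grad)
    if norm == 0:
        return np.zeros_like(grad)
    # Normalized magnitudes in [0, 1], scaled onto the level grid.
    scaled = np.abs(grad) / norm * num_levels
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part (unbiasedness).
    prob = scaled - lower
    levels = lower + (np.random.rand(*grad.shape) < prob)
    return np.sign(grad) * norm * levels / num_levels
```

Each coordinate then needs only about `log2(num_levels) + 1` bits (level index plus sign) instead of a full 32-bit float, while the per-coordinate quantization error is bounded by `norm / num_levels`, shrinking as the quantization level grows.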