In the emerging paradigm of edge inference, neural networks (NNs) are partitioned across distributed edge devices that collaboratively perform inference via wireless transmission. However, standard NNs are generally trained in a noiseless environment, creating a mismatch with the noisy channels encountered during edge deployment. In this paper, we address this issue by characterizing the channel-induced performance degradation as a generalization error against unseen channels. We introduce an augmented NN model that incorporates channel statistics directly into the weight space, allowing us to derive PAC-Bayesian generalization bounds that explicitly quantify the impact of wireless distortion. We further provide closed-form expressions for practical channels to demonstrate the tractability of these bounds. Inspired by the theoretical results, we propose a channel-aware training algorithm that minimizes a surrogate objective based on the derived bound. Simulations show that the proposed algorithm effectively improves inference accuracy by leveraging channel statistics, without end-to-end re-training.