In the emerging paradigm of edge learning, neural networks (NNs) are partitioned across distributed edge devices that collaboratively perform inference via wireless transmission. However, deploying NNs for edge inference over wireless channels inevitably degrades performance, since the exact channel realizations encountered during inference are unknown at training time. In this paper, we establish a theoretical framework to evaluate and bound this performance degradation. Inspired by statistical learning theory, we define a wireless generalization error that characterizes the gap between the empirical performance during training and the expected inference performance under the true stochastic channel. To enable theoretical analysis, we introduce an augmented NN model that incorporates channel statistics directly into the weight space. Leveraging the PAC-Bayesian framework, we derive a high-probability bound on this error, which provides theoretical guarantees on wireless inference performance. Furthermore, we propose a channel-aware training algorithm that minimizes a tractable surrogate objective derived from the bound. Simulations demonstrate that the proposed algorithm effectively improves wireless inference performance and model robustness under various channel conditions.