Semantic communications based on deep joint source-channel coding (JSCC) aim to improve communication efficiency by transmitting only task-relevant information. However, ensuring robustness to the stochasticity of communication channels remains a key challenge in learning-based JSCC. In this paper, we propose a novel regularization technique for learning-based JSCC that enhances robustness against channel noise. The proposed method uses the Kullback-Leibler (KL) divergence as a regularization term in the training loss, measuring the discrepancy between two posterior distributions: one under noisy channel conditions (the noisy posterior) and one for a noise-free system (the noise-free posterior). Reducing this KL divergence mitigates the impact of channel noise on task performance by keeping the noisy posterior close to the noise-free posterior. We further show that the expectation of the KL divergence given the encoded representation can be analytically approximated using the Fisher information matrix and the covariance matrix of the channel noise. Notably, the proposed regularization is architecture-agnostic, making it broadly applicable to general semantic communication systems over noisy channels. Our experimental results show that the proposed regularization consistently improves task performance across diverse semantic communication systems and channel conditions.
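To make the formulation concrete, the following is a minimal sketch in assumed notation (the symbols $f_\theta$, $p_\phi$, $\lambda$, and $\Sigma$ are illustrative and not taken from the paper's body). With encoder output $z = f_\theta(x)$, channel noise $n \sim \mathcal{N}(0, \Sigma)$, and a decoder-induced posterior $p_\phi(y \mid \cdot)$ over the task variable $y$, the regularized training objective takes the form

$$ \mathcal{L}(\theta, \phi) \;=\; \mathcal{L}_{\text{task}}(\theta, \phi) \;+\; \lambda \, \mathbb{E}\big[ D_{\mathrm{KL}}\big( p_\phi(y \mid z) \,\big\|\, p_\phi(y \mid z + n) \big) \big], $$

where the ordering of the KL arguments is an assumption here. The analytical approximation stated above is consistent with a standard second-order expansion of the KL divergence in the noise $n$:

$$ \mathbb{E}_n\big[ D_{\mathrm{KL}}\big( p_\phi(y \mid z) \,\big\|\, p_\phi(y \mid z + n) \big) \,\big|\, z \big] \;\approx\; \tfrac{1}{2} \, \mathrm{tr}\big( F(z) \, \Sigma \big), $$

where $F(z)$ denotes the Fisher information matrix of the decoder posterior with respect to $z$ and $\Sigma$ is the channel-noise covariance.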
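The training procedure implied above can also be sketched in code. Below is a minimal PyTorch sketch under assumptions not stated in the abstract: an AWGN channel, a classification task, and a single-sample Monte-Carlo estimate of the KL term in place of the analytical Fisher-information approximation; `encoder`, `decoder`, `noise_std`, and `lam` are hypothetical names.

```python
import torch
import torch.nn.functional as F

def jscc_kl_regularized_loss(encoder, decoder, x, y, noise_std=0.1, lam=1.0):
    """Task loss plus a KL regularizer between the noise-free and noisy
    posteriors (illustrative sketch; AWGN channel and classifier assumed)."""
    z = encoder(x)                                   # encoded representation
    z_noisy = z + noise_std * torch.randn_like(z)    # simulated AWGN channel

    logits_clean = decoder(z)        # noise-free posterior (as logits)
    logits_noisy = decoder(z_noisy)  # noisy posterior (as logits)

    # The task objective is evaluated on the noisy branch, since inference
    # happens after the channel.
    task_loss = F.cross_entropy(logits_noisy, y)

    # F.kl_div(input, target) computes KL(target || input), so this is
    # KL( p(y|z) || p(y|z_noisy) ): it pulls the noisy posterior toward
    # the noise-free one. Detaching the clean branch treats it as a fixed
    # reference (an assumption, not necessarily the paper's choice).
    kl = F.kl_div(
        F.log_softmax(logits_noisy, dim=-1),
        F.log_softmax(logits_clean, dim=-1).detach(),
        log_target=True,
        reduction="batchmean",
    )
    return task_loss + lam * kl
```

The KL term here is a one-sample Monte-Carlo estimate; averaging over several noise draws, or substituting the closed-form $\tfrac{1}{2}\mathrm{tr}(F(z)\Sigma)$ approximation above, are natural refinements.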