We establish the fundamental limit of semantic communications (joint source-channel coding) when the transmission must be kept covert from an external warden. We derive information-theoretic achievability and matching converse results, and we show that source and channel coding separation holds for this setup. Furthermore, we demonstrate through an experimental setup that a deep neural network can be trained to achieve covert semantic communication for a classification task. Our numerical experiments confirm our theoretical findings, which indicate that for reliable joint source-channel coding the number of transmitted source symbols can scale only as the square root of the number of channel uses.
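The square-root scaling can be illustrated with a minimal sketch. Under the covert-communication square-root law, the number of source symbols k conveyable over n channel uses behaves as k ≈ c·√n, where the constant c depends on the channel and the covertness constraint; the function name and the value c = 0.5 below are hypothetical placeholders, not quantities from the paper.

```python
import math

def max_source_symbols(n_channel_uses: int, c: float = 0.5) -> int:
    """Square-root scaling of covert throughput with blocklength.

    c is an illustrative constant; in the theory it is determined by
    the channel statistics and the warden's detection constraint.
    """
    return int(c * math.sqrt(n_channel_uses))

if __name__ == "__main__":
    for n in (10_000, 40_000, 160_000):
        # Quadrupling the blocklength only doubles the symbol budget.
        print(n, max_source_symbols(n))
```

Note the sublinear behavior: per-channel-use throughput k/n ≈ c/√n vanishes as n grows, which is the hallmark of covert communication as opposed to the linear scaling of ordinary (non-covert) joint source-channel coding.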