The relay channel, consisting of a source-destination pair and a relay, is a fundamental component of cooperative communications. While the capacity of a general relay channel remains unknown, various relaying strategies, including compress-and-forward (CF), have been proposed. For CF, given the correlated signals at the relay and destination, distributed compression techniques, such as Wyner-Ziv coding, can be harnessed to utilize the relay-to-destination link more efficiently. In light of recent advancements in neural network-based distributed compression, we revisit the relay channel problem, integrating a learned one-shot Wyner-Ziv compressor into a primitive relay channel with a finite-capacity, orthogonal (out-of-band) relay-to-destination link. The resulting neural CF scheme demonstrates that our task-oriented compressor recovers "binning" of the quantized indices at the relay, mimicking the optimal asymptotic CF strategy, even though no structure exploiting knowledge of the source statistics was imposed on the design. We show that the proposed neural CF scheme, employing finite-order modulation, operates close to the capacity of a primitive relay channel that assumes a Gaussian codebook. Our learned compressor provides the first proof-of-concept toward a practical neural CF relaying scheme.
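As a toy illustration of the Wyner-Ziv "binning" idea the abstract refers to, the sketch below uses a classical hand-designed scalar quantizer with modulo binning, not the paper's learned compressor; the channel model, noise levels, and quantizer parameters are all illustrative assumptions. The relay quantizes its observation, transmits only a coarse bin index, and the destination resolves the ambiguity within the bin using its correlated side information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative correlated observations (not the paper's exact model):
# the relay sees y_r and the destination sees side information y_d,
# both noisy versions of the same source x.
x = rng.normal(size=1000)
y_r = x + 0.1 * rng.normal(size=1000)   # relay observation
y_d = x + 0.3 * rng.normal(size=1000)   # destination side information

# Step 1: scalar quantization at the relay (16 levels on [-4, 4]).
levels = np.linspace(-4, 4, 16)
q_idx = np.abs(y_r[:, None] - levels[None, :]).argmin(axis=1)

# Step 2: "binning" -- send only a bin index (4 bins) over the
# finite-capacity relay-to-destination link, 2 bits instead of 4.
n_bins = 4
bins = q_idx % n_bins  # each bin groups every 4th quantizer level

# Step 3: the destination picks, within the received bin, the
# quantizer level closest to its side information y_d.
decoded = np.empty_like(y_r)
for i, (b, s) in enumerate(zip(bins, y_d)):
    candidates = levels[np.arange(len(levels)) % n_bins == b]
    decoded[i] = candidates[np.abs(candidates - s).argmin()]

# Because candidates within a bin are far apart relative to the
# side-information noise, decoding errors are rare, and distortion
# stays near that of sending the full 4-bit index.
mse = np.mean((decoded - y_r) ** 2)
```

The design choice mirrors the asymptotic CF intuition: binning discards bits that the destination's side information can recover, which is exactly the structure the paper's learned compressor is observed to rediscover without being told the source statistics.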