Deep learning has enabled significant advances in feedback-based channel coding, yet existing learned schemes remain fundamentally limited: they employ fixed block lengths, suffer degraded performance at high rates, and cannot fully exploit the adaptive potential of feedback. This paper introduces Deep Variable-Length Feedback (DeepVLF) coding, a flexible coding framework that dynamically adjusts transmission length via learned feedback. We propose two complementary architectures: DeepVLF-R, where termination is receiver-driven, and DeepVLF-T, where the transmitter controls termination. Both architectures leverage bit-group partitioning and transformer-based encoder-decoder networks to enable fine-grained rate adaptation in response to feedback. Evaluations over AWGN and 5G-NR fading channels demonstrate that DeepVLF substantially outperforms state-of-the-art learned feedback codes. It achieves the same block error rate with 20%-55% fewer channel uses and lowers error floors by orders of magnitude, particularly in high-rate regimes. Encoding dynamics analysis further reveals that the models autonomously learn a two-phase strategy analogous to classical Schalkwijk-Kailath coding: an initial information-carrying phase followed by a noise-cancellation refinement phase. This emergent behavior underscores the interpretability and information-theoretic alignment of the learned codes.