In frequency division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems, downlink channel state information (CSI) is crucial for achieving high spectral and energy efficiency. However, CSI feedback overhead becomes a major bottleneck as the number of antennas grows. Although existing deep learning-based CSI compression methods have shown great potential, they struggle to capture both local and global features of CSI, which caps the achievable compression efficiency. To address these issues, we propose TCLNet, a unified CSI compression framework that integrates a hybrid Transformer-CNN architecture for lossy compression with a hybrid language model (LM) and factorized model (FM) design for lossless compression. The lossy module jointly exploits local features and global context, while the lossless module adaptively switches between context-aware coding and parallel coding to optimize the rate-distortion-complexity (RDC) trade-off. Extensive experiments on both real-world and simulated datasets demonstrate that TCLNet outperforms existing approaches in reconstruction accuracy and transmission efficiency, achieving up to a 5 dB performance gain across diverse scenarios. Moreover, we show that large language models (LLMs) can serve as zero-shot CSI lossless compressors via carefully designed prompts.
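The adaptive switch between context-aware (LM) and parallel (FM) lossless coding can be illustrated with a minimal sketch. All names (`rdc_cost`, `select_lossless_coder`) and the numeric estimates are illustrative assumptions, not the paper's actual criterion: the idea is simply that a coder is chosen to minimize a weighted combination of estimated bit-rate and decoding complexity.

```python
def rdc_cost(rate_bits, latency_ms, lam=0.1):
    # Weighted rate-complexity cost: lam trades extra bits against
    # decoding latency (larger lam penalizes slow serial decoding more).
    return rate_bits + lam * latency_ms

def select_lossless_coder(candidates, lam=0.1):
    # candidates: dict of name -> (estimated rate in bits, decode latency in ms).
    # Returns the name of the coder with the lowest weighted cost.
    return min(candidates, key=lambda name: rdc_cost(*candidates[name], lam=lam))

# Hypothetical estimates: the context-aware LM coder compresses better,
# but the factorized FM coder decodes in parallel and is far faster.
coders = {"context_lm": (1200.0, 80.0), "parallel_fm": (1350.0, 5.0)}
print(select_lossless_coder(coders, lam=0.1))  # rate-dominated regime
print(select_lossless_coder(coders, lam=5.0))  # latency-dominated regime
```

With a small `lam` the better-compressing LM coder wins; as the latency weight grows, the selection flips to the parallel FM coder, mirroring the rate-distortion-complexity trade-off described above.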