Massive multiple-input multiple-output (mMIMO) downlink precoding offers high spectral efficiency but remains challenging to deploy in practice: near-optimal algorithms such as weighted minimum mean squared error (WMMSE) are computationally expensive and sensitive to SNR and channel-estimation quality, while existing deep learning (DL)-based solutions often lack robustness and require retraining at each deployment site. This paper proposes a plug-and-play precoder (PaPP), a DL framework whose backbone can be trained for either fully digital precoding (FDP) or hybrid beamforming (HBF) and reused across sites, transmit-power levels, and varying amounts of channel-estimation error, avoiding the need to train a new model from scratch at each deployment. PaPP combines a high-capacity teacher and a compact student through a self-supervised loss that balances teacher imitation and normalized sum-rate, trained with meta-learning domain generalization and transmit-power-aware input normalization. Numerical results on ray-tracing data from three unseen sites show that both the PaPP FDP and HBF models outperform conventional and deep learning baselines after fine-tuning with a small set of local unlabeled samples. Across both architectures, PaPP achieves more than a 21$\times$ reduction in modeled computation energy and maintains good performance under channel-estimation errors, making it a practical solution for energy-efficient mMIMO precoding.