Beamforming in millimeter-wave (mmWave) high-mobility environments typically incurs substantial training overhead. While prior studies suggest that sub-6 GHz channels can be exploited to predict optimal mmWave beams, existing methods depend on large deep learning (DL) models with prohibitive computational and memory requirements. In this paper, we propose a computationally efficient framework for sub-6 GHz channel-to-mmWave beam mapping based on the knowledge distillation (KD) technique. We develop two compact student DL architectures based on individual and relational distillation strategies, which retain only a few hidden layers yet closely mimic the behavior of large teacher DL models. Extensive simulations demonstrate that the proposed student models match the teacher's beam prediction accuracy and spectral efficiency while reducing trainable parameters and computational complexity by 99%.
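To make the two distillation strategies concrete, the sketch below illustrates the standard loss formulations they are commonly built on: individual (response-based) distillation matches each sample's softened teacher output, while relational distillation matches normalized pairwise distances between sample embeddings. This is an illustrative sketch of the generic KD losses, not the paper's actual architecture; the temperature `T` and the distance normalization are conventional choices assumed here.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def individual_kd_loss(teacher_logits, student_logits, T=2.0):
    """Per-sample KL divergence between softened teacher and student
    outputs, scaled by T^2 (a common convention in response-based KD)."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def relational_kd_loss(teacher_embs, student_embs):
    """Match the structure of a batch: mean-squared error between the
    mean-normalized pairwise Euclidean distances of teacher and student
    embeddings (a relational-KD-style distance loss)."""
    def norm_pdists(embs):
        d = [math.dist(embs[i], embs[j])
             for i in range(len(embs))
             for j in range(i + 1, len(embs))]
        mean = sum(d) / len(d)
        return [x / mean for x in d]
    td = norm_pdists(teacher_embs)
    sd = norm_pdists(student_embs)
    return sum((a - b) ** 2 for a, b in zip(td, sd)) / len(td)
```

A student trained with either loss needs no teacher at inference time, which is what permits the reported reduction in parameters and compute.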