In this paper, we propose BeamLLM, a vision-aided millimeter-wave (mmWave) beam prediction framework that leverages large language models (LLMs) to address the high training overhead and latency of mmWave communication systems. By combining computer vision (CV) with the cross-modal reasoning capabilities of LLMs, the framework extracts user equipment (UE) positional features from RGB images and aligns visual-temporal features with the LLMs' semantic space through reprogramming techniques. Evaluated on a realistic vehicle-to-infrastructure (V2I) scenario, the proposed method achieves 61.01% top-1 accuracy and 97.39% top-3 accuracy on the standard prediction task, significantly outperforming traditional deep learning models. In the few-shot prediction scenario, performance degrades by only 12.56% (top-1) and 5.55% (top-3) from time sample 1 to time sample 10, demonstrating superior prediction capability.