Accurate beam prediction is a key enabler for next-generation wireless communication systems. In this paper, we propose a multimodal large language model (LLM)-based beam prediction framework that effectively exploits the contextual information provided by sensory data, including RGB camera images and LiDAR point clouds. To fuse these heterogeneous modalities, we design specialized modality encoders together with a beam-guided attention masking mechanism and a high-frequency temporal alignment strategy, enabling robust cross-modal feature integration in dynamic environments. Furthermore, we construct a large-scale multimodal dataset for communication, named Multimodal-Wireless, which covers diverse weather and traffic conditions with high-fidelity ray-tracing labels. Extensive simulation results demonstrate that the proposed approach significantly reduces the reliance on oracle angle-of-departure knowledge and consistently outperforms state-of-the-art multimodal LLM-based beam prediction methods in both beam accuracy and communication performance, raising the average Top-1 accuracy to 80.8% and the average normalized gain to 89.1%.
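For readers unfamiliar with the two reported metrics, the following is a minimal sketch of their standard definitions in the beam prediction literature; the paper's exact formulations may differ, and the notation here ($\mathbf{h}_n$ for the channel of sample $n$, $\mathcal{F}$ for the beam codebook, $\hat{\mathbf{f}}_n$ for the predicted beam) is an assumption, not taken from the text:

% Hedged sketch (assumed notation, not from the paper): f*_n is the
% ground-truth optimal codebook beam for sample n, obtained by exhaustive
% search over the codebook F; N is the number of test samples.
\[
\text{Top-1 accuracy} = \frac{1}{N}\sum_{n=1}^{N}
\mathbb{1}\!\left[\hat{\mathbf{f}}_n = \mathbf{f}^{\star}_n\right],
\qquad
\mathbf{f}^{\star}_n = \arg\max_{\mathbf{f}\in\mathcal{F}}
\left|\mathbf{h}_n^{\mathsf{H}}\mathbf{f}\right|^2,
\]
\[
\text{Normalized gain} = \frac{1}{N}\sum_{n=1}^{N}
\frac{\left|\mathbf{h}_n^{\mathsf{H}}\hat{\mathbf{f}}_n\right|^2}
{\left|\mathbf{h}_n^{\mathsf{H}}\mathbf{f}^{\star}_n\right|^2}.
\]

Under these definitions, a normalized gain of 89.1% means the predicted beams capture, on average, 89.1% of the beamforming power achievable with exhaustive-search beam selection.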