Recent advances in integrated sensing and communication (ISAC) unmanned aerial vehicles (UAVs) have enabled their widespread deployment in critical applications such as emergency management. This paper investigates the challenge of efficient multitask multimodal data communication in UAV-assisted ISAC systems. In the considered system model, hyperspectral image (HSI) and LiDAR data are collected by UAV-mounted sensors for both target classification and data reconstruction at the terrestrial base station (BS). The limited channel capacity and complex environmental conditions pose significant challenges to effective air-to-ground communication. To tackle this issue, we propose a perception-enhanced multitask multimodal semantic communication (PE-MMSC) system that strategically leverages the onboard computational and sensing capabilities of UAVs. In particular, we first propose a robust multimodal feature fusion method that adaptively combines HSI and LiDAR semantics while accounting for channel noise and task requirements. We then introduce a perception-enhanced (PE) module that incorporates attention mechanisms to perform coarse classification on the UAV side, thereby optimizing the attention-based multimodal fusion and transmission. Experimental results demonstrate that the proposed PE-MMSC system achieves 5\%--10\% higher target classification accuracy than conventional systems without the PE module, while maintaining comparable data reconstruction quality with acceptable computational overhead.
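To make the attention-based multimodal fusion concrete, the sketch below shows one simple way two modality feature vectors (HSI and LiDAR) could be combined with learned attention weights. This is a minimal illustration under assumed shapes, not the paper's actual PE-MMSC architecture; the function name `attention_fuse` and the projection matrices `W_q`, `W_k` are hypothetical placeholders.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_fuse(hsi_feat, lidar_feat, W_q, W_k):
    """Fuse two modality feature vectors via scalar attention weights.

    hsi_feat, lidar_feat : (d,) semantic features per modality (assumed shapes)
    W_q, W_k             : (d, d) hypothetical query/key projections
    Returns the fused (d,) feature and the (2,) modality weights.
    """
    feats = np.stack([hsi_feat, lidar_feat])          # (2, d)
    query = feats.mean(axis=0) @ W_q                  # shared query, (d,)
    keys = feats @ W_k                                # per-modality keys, (2, d)
    scores = keys @ query / np.sqrt(query.shape[0])   # scaled dot-product scores
    weights = softmax(scores)                         # attention over modalities
    fused = weights @ feats                           # weighted sum, (d,)
    return fused, weights

# Toy usage with random features and projections
rng = np.random.default_rng(0)
d = 8
fused, weights = attention_fuse(rng.normal(size=d), rng.normal(size=d),
                                rng.normal(size=(d, d)), rng.normal(size=(d, d)))
```

In a noisy-channel setting such as the one considered here, the attention scores could additionally be conditioned on an estimate of the channel state, so that the more reliably received modality receives a larger fusion weight.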