In this work, we compare emergent communication (EC) built upon multi-agent deep reinforcement learning (MADRL) and language-oriented semantic communication (LSC) empowered by a pre-trained large language model (LLM) using human language. In a multi-agent remote navigation task with multimodal input data comprising location and channel maps, it is shown that EC incurs high training cost and struggles with multimodal data, whereas LSC incurs high inference computing cost due to the LLM's large size. To address their respective bottlenecks, we propose a novel framework of language-guided EC (LEC), which guides the EC training using LSC via knowledge distillation (KD). Simulations corroborate that LEC achieves shorter travel time while avoiding areas with poor channel conditions, and accelerates MADRL training convergence by up to 61.8% compared to EC.
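The KD step at the heart of LEC can be illustrated with a standard temperature-softened distillation term, where the LSC teacher's message distribution supervises the EC student's message logits. The plain-NumPy formulation below is a minimal sketch under common KD conventions, not the paper's actual implementation; all function and variable names are illustrative assumptions:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over message symbols."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-softened KL(teacher || student), the usual KD term.

    In LEC (as sketched here), teacher_logits would come from the
    LSC/LLM side and student_logits from the EC agent's message head;
    the T*T factor keeps gradient magnitudes comparable across T.
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s)))) * T * T

# Hypothetical combined objective: MADRL task loss plus the KD penalty,
# weighted by an assumed coefficient lam.
def lec_loss(task_loss, student_logits, teacher_logits, lam=0.5, T=2.0):
    return task_loss + lam * kd_loss(student_logits, teacher_logits, T)
```

In this sketch, annealing `lam` toward zero over training would let the EC agents rely on the LLM guidance early (speeding convergence) while retaining their own learned protocol at the end.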