Semantic communication aims to convey meaning for effective task execution, but differing latent representations across AI-native devices can cause semantic mismatches that hinder mutual understanding. This paper introduces a novel approach to mitigating latent space misalignment in multi-agent AI-native semantic communications. In a downlink scenario, we consider an access point (AP) communicating with multiple users to accomplish a specific AI-driven task. Our method implements a protocol that deploys a shared semantic pre-equalizer at the AP and local semantic equalizers at the user devices, fostering mutual understanding and task-oriented communication under power and complexity constraints. To achieve this, we employ federated optimization for the decentralized training of the semantic equalizers at the AP and user sides. Numerical results validate the proposed approach in goal-oriented semantic communication, revealing key trade-offs among accuracy, communication overhead, complexity, and the semantic proximity of AI-native communication devices.
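The federated alignment scheme described above can be illustrated with a toy linear model. This is a minimal sketch under illustrative assumptions, not the paper's actual architecture: latent mismatches are modeled as fixed random linear maps, equalizers are plain matrices trained with a squared alignment error, and each round combines local user-side updates with an averaged (federated) update of the shared pre-equalizer at the AP.

```python
import numpy as np

rng = np.random.default_rng(0)

d_ap, d_user, n = 8, 8, 256  # latent dims and batch size (toy values)
K = 3                        # number of users served by the AP

# Toy semantic mismatch: each user's latent space is a fixed, unknown
# linear transformation of the AP's latent space.
Z = rng.standard_normal((d_ap, n))                           # AP latent batch
M = [rng.standard_normal((d_user, d_ap)) for _ in range(K)]  # hidden mismatches
T = [Mk @ Z for Mk in M]                                     # user-side targets

F = np.eye(d_ap)                              # shared pre-equalizer (AP side)
G = [np.eye(d_user, d_ap) for _ in range(K)]  # local equalizers (user side)

lr, rounds = 0.01, 300
for _ in range(rounds):
    grad_F = np.zeros_like(F)
    for k in range(K):
        E = G[k] @ F @ Z - T[k]             # alignment error for user k
        G[k] -= lr * 2 * E @ (F @ Z).T / n  # local update of user equalizer
        grad_F += 2 * G[k].T @ E @ Z.T / n  # user k's gradient w.r.t. shared F
    F -= lr * grad_F / K                    # federated averaging step at the AP

# Residual misalignment after training, averaged over users.
mse = np.mean([np.mean((G[k] @ F @ Z - T[k]) ** 2) for k in range(K)])
```

In this sketch the pre-equalizer `F` is trained from gradients aggregated across users, so it captures structure common to all of them, while each lightweight `G[k]` absorbs the user-specific residual; the split loosely mirrors the complexity trade-off mentioned in the abstract, since the per-user equalizers can stay small when the shared stage does most of the alignment.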