Ubiquitous image transmission in emerging applications imposes a heavy burden on limited wireless resources. Since text can convey a large amount of information with very little data, transmitting a descriptive caption of an image can greatly reduce the volume of transmitted data. In this context, this paper develops a novel semantic communication framework based on a text-to-image generative model (Gen-SC). In particular, the transmitter converts the input image into textual modality data; the text is then transmitted over a noisy channel, and the receiver uses the received text to generate an image. To improve the robustness of text transmission over noisy channels, we design a transformer-based text transmission codec. Moreover, we obtain a personalized knowledge base by fine-tuning the diffusion model to meet the requirements of task-oriented transmission scenarios. Simulation results show that, for portrait image transmission, the proposed framework achieves high perceptual quality while reducing the transmitted data volume by up to 99%, and is robust to wireless channel noise.
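The data-reduction claim and the noisy-channel setting can be illustrated with a minimal sketch. This is not the Gen-SC codec itself (which is transformer-based and paired with a fine-tuned diffusion model); it only shows, under assumed illustrative numbers (a 512x512 RGB portrait and a hypothetical caption), how small a caption is relative to raw pixels and how a binary symmetric channel corrupts the caption's bits in transit.

```python
import random


def text_reduction_ratio(image_bytes: int, caption: str) -> float:
    """Fraction of data saved by sending a caption instead of the raw image."""
    return 1.0 - len(caption.encode("utf-8")) / image_bytes


def bsc_transmit(text: str, flip_prob: float, seed: int = 0) -> bytes:
    """Pass the caption's UTF-8 bits through a binary symmetric channel
    that flips each bit independently with probability `flip_prob`."""
    rng = random.Random(seed)
    out = bytearray()
    for byte in text.encode("utf-8"):
        for bit in range(8):
            if rng.random() < flip_prob:
                byte ^= 1 << bit  # channel noise flips this bit
        out.append(byte)
    return bytes(out)


# Illustrative values, not taken from the paper's experiments.
caption = "a studio portrait of a smiling woman with short dark hair"
raw_image_bytes = 512 * 512 * 3  # uncompressed 512x512 RGB portrait

ratio = text_reduction_ratio(raw_image_bytes, caption)
noisy_caption = bsc_transmit(caption, flip_prob=0.01)
```

Even against uncompressed pixels this toy ratio already exceeds 99%, which is why a robust text codec (rather than image compression) is the bottleneck the framework targets: any residual bit errors in `noisy_caption` must be corrected before the receiver's diffusion model consumes the text.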