Despite significant advancements in traditional syntactic communications based on Shannon's theory, these methods struggle to meet the requirements of 6G immersive communications, especially under challenging transmission conditions. With the development of generative artificial intelligence (GenAI), progress has been made in reconstructing videos from high-level semantic information. In this paper, we propose a scalable generative video semantic communication framework that extracts and transmits semantic information to achieve high-quality video reconstruction. Specifically, at the transmitter, a textual description and other condition signals (e.g., the first frame, sketches) are extracted from the source video, serving as text and structural semantics, respectively. At the receiver, diffusion-based GenAI large models fuse the semantics of these multiple modalities to reconstruct the video. Simulation results demonstrate that, at an ultra-low channel bandwidth ratio (CBR), our scheme effectively captures semantic information and reconstructs videos aligned with human perception across different signal-to-noise ratios (SNRs). Notably, the proposed ``First Frame+Desc.'' scheme consistently achieves a CLIP score exceeding 0.92 at CBR = 0.0057 for SNR > 0 dB, demonstrating robust performance even under low-SNR conditions.
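For clarity, one common way to define the two reported metrics is sketched below; this is an assumed formulation, and the paper's exact definitions (e.g., whether the CLIP score compares reconstructed frames against original frames or against the transmitted description) may differ.
\[
\mathrm{CBR} = \frac{k}{n}, \qquad n = N_{\mathrm{frames}} \times H \times W \times 3,
\]
where $k$ is the number of transmitted channel symbols and $n$ is the source dimension of the video, and
\[
\mathrm{CLIP\text{-}score} = \frac{1}{N}\sum_{i=1}^{N} \frac{\phi(\hat{x}_i)^{\top}\phi(x_i)}{\|\phi(\hat{x}_i)\|\,\|\phi(x_i)\|},
\]
where $\phi(\cdot)$ denotes a CLIP image encoder and $x_i$, $\hat{x}_i$ are the $i$-th original and reconstructed frames. Under this assumed definition, CBR = 0.0057 corresponds to transmitting roughly 0.57\% as many channel symbols as source pixel values.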