Most video compression methods focus on human visual perception and neglect semantic preservation. This leads to severe semantic loss during compression, hampering downstream video analysis tasks. In this paper, we propose a Masked Video Modeling (MVM)-powered compression framework that specifically preserves video semantics by jointly mining and compressing them in a self-supervised manner. While MVM is proficient at learning generalizable semantics through the masked patch prediction task, it may also encode non-semantic information such as trivial textural details, wasting bit cost and introducing semantic noise. To suppress this, we explicitly regularize the non-semantic entropy of the compressed video in the MVM token space. The proposed framework is instantiated as a simple Semantic-Mining-then-Compression (SMC) model. Furthermore, we extend SMC into an advanced SMC++ model in several respects. First, we equip it with a masked motion prediction objective, leading to better temporal semantic learning. Second, we introduce a Transformer-based compression module to improve semantic compression efficacy. Since directly mining the complex redundancy among heterogeneous features from different coding stages is non-trivial, we introduce a compact blueprint semantic representation that aligns these features into a similar form, fully unleashing the power of the Transformer-based compression module. Extensive results demonstrate that the proposed SMC and SMC++ models show remarkable superiority over previous traditional, learnable, and perceptual-quality-oriented video codecs on three video analysis tasks and seven datasets. \textit{Code and models are available at: https://github.com/tianyuan168326/VideoSemanticCompression-Pytorch.}
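The interplay between the masked-prediction objective and the non-semantic entropy regularization described above can be sketched numerically. This is a minimal toy illustration, not the paper's actual model: the tensor sizes, the mean predictor standing in for the MVM network, the scalar-bin quantizer standing in for the learned token codebook, and the 0.1 weight are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video": 4 frames x 2x2 patches, each patch an 8-dim token
# (hypothetical sizes chosen only for illustration).
tokens = rng.normal(size=(4 * 2 * 2, 8))

# Masked patch prediction: hide 75% of tokens and reconstruct them from the
# visible ones. A simple mean predictor stands in for the MVM network.
idx = rng.permutation(tokens.shape[0])
mask = np.zeros(tokens.shape[0], dtype=bool)
mask[idx[: int(0.75 * tokens.shape[0])]] = True
visible_mean = tokens[~mask].mean(axis=0)
pred = np.tile(visible_mean, (int(mask.sum()), 1))
mvm_loss = float(np.mean((pred - tokens[mask]) ** 2))  # loss on masked tokens only

# Non-semantic entropy regularization: quantize tokens into discrete codes and
# penalize the empirical entropy of the code distribution, discouraging bits
# spent on trivial texture detail (a stand-in for the paper's token-space term).
codes = np.digitize(tokens.ravel(), bins=np.linspace(-2, 2, 8))
probs = np.bincount(codes, minlength=9) / codes.size
probs = probs[probs > 0]
entropy_bits = float(-np.sum(probs * np.log2(probs)))

# Joint objective: semantics mining plus entropy suppression (0.1 is illustrative).
total_loss = mvm_loss + 0.1 * entropy_bits
```

In a real instantiation, `pred` would come from a trainable Transformer and the entropy term from a learned entropy model over the compressed representation; the sketch only shows how the two loss terms combine.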