Implicit Neural Representations (INRs) have emerged as powerful representations for encoding all forms of data, including images, videos, audio, and scenes. For video, many INRs have been proposed for the compression task, and recent methods feature significant improvements in encoding time, storage, and reconstruction quality. However, these encoded representations lack semantic meaning, so they cannot be used for downstream tasks that require such properties, such as retrieval. This can act as a barrier to the adoption of video INRs over traditional codecs, as they offer no significant edge beyond compression. To alleviate this, we propose a flexible framework that decouples the spatial and temporal aspects of the video INR. We accomplish this with a dictionary of per-frame latents that are learned jointly with a set of video-specific hypernetworks, such that given a latent, these hypernetworks can predict the INR weights to reconstruct the corresponding frame. This framework not only retains compression efficiency, but the learned latents can also be aligned with features from large vision models, which grants them discriminative properties. We align these latents with CLIP and show good performance on both compression and video retrieval tasks. By aligning with VideoLlama, we are able to perform open-ended chat with our learned latents as the visual inputs. Additionally, the learned latents serve as a proxy for the underlying weights, allowing us to perform tasks such as video interpolation. These semantic properties and applications, coexisting with the ability to perform compression, interpolation, and super-resolution, are a first in this field of work.
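To make the decoupling concrete, the following is a minimal sketch of a per-frame latent dictionary driving a hypernetwork that emits weights for a small coordinate-based INR, with an auxiliary CLIP-alignment term; this assumes a PyTorch setup, and the module names, layer sizes, projection head, and loss weighting are illustrative assumptions rather than the exact architecture used in the paper.

```python
# Sketch of the decoupled video-INR idea: per-frame latents + hypernetwork.
# All sizes, names, and the CLIP-alignment head are illustrative assumptions.
import torch
import torch.nn as nn


class FrameINR(nn.Module):
    """Tiny coordinate MLP whose weights are supplied externally per frame."""

    def forward(self, coords, weights):
        # coords: (N, 2) pixel coordinates; weights: dict of per-frame tensors.
        h = torch.sin(coords @ weights["w0"] + weights["b0"])
        return torch.sigmoid(h @ weights["w1"] + weights["b1"])  # RGB in [0, 1]


class HyperNet(nn.Module):
    """Maps a per-frame latent to the INR weights that reconstruct that frame."""

    def __init__(self, latent_dim=256, hidden=128, coord_dim=2, out_dim=3):
        super().__init__()
        self.shapes = {
            "w0": (coord_dim, hidden), "b0": (hidden,),
            "w1": (hidden, out_dim), "b1": (out_dim,),
        }
        total = sum(torch.Size(s).numel() for s in self.shapes.values())
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, total)
        )

    def forward(self, z):
        flat = self.net(z)
        weights, i = {}, 0
        for name, shape in self.shapes.items():
            n = torch.Size(shape).numel()
            weights[name] = flat[i:i + n].view(*shape)
            i += n
        return weights


num_frames, latent_dim, clip_dim = 64, 256, 512
latents = nn.Embedding(num_frames, latent_dim)      # per-frame latent dictionary
hyper = HyperNet(latent_dim)
inr = FrameINR()
proj = nn.Linear(latent_dim, clip_dim)              # hypothetical alignment head


def training_step(t, coords, target_rgb, clip_feat, align_weight=0.1):
    """Joint objective for frame t: reconstruction + latent-to-CLIP alignment."""
    z = latents(torch.tensor(t))
    pred = inr(coords, hyper(z))
    recon = ((pred - target_rgb) ** 2).mean()
    # Align the latent with a frozen CLIP feature of the frame (cosine distance).
    align = 1 - torch.cosine_similarity(proj(z), clip_feat, dim=0)
    return recon + align_weight * align
```

Because the latents, rather than the INR weights themselves, carry the per-frame identity, operations such as interpolating between two latents before passing them to the hypernetwork give a simple route to tasks like video interpolation, while the aligned latents can be fed to retrieval or chat models in place of image features.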