Video encompasses both visual and auditory data, creating a perceptually rich experience in which the two modalities complement each other. This makes video a valuable medium for investigating the interplay between audio and visual elements. Previous studies of the audio-visual modalities have focused primarily on either audio-visual representation learning or generative modeling of one modality conditioned on the other, leaving a disconnect between the two branches; a unified framework that both learns representations and generates one modality from the other has yet to be developed. In this work, we introduce Vision to Audio and Beyond (VAB), a framework that bridges the gap between audio-visual representation learning and vision-to-audio generation. The key idea of VAB is to perform representation learning and generative modeling in latent spaces rather than on raw video frames and audio. Specifically, VAB uses a pre-trained audio tokenizer and an image encoder to obtain audio tokens and visual features, respectively, and is then pre-trained on the task of visually conditioned masked audio token prediction. This training strategy equips the model with contextual learning and video-to-audio generation in a single objective. After pre-training, VAB uses iterative decoding to rapidly generate audio tokens conditioned on visual features. Because VAB is a unified model, its backbone can be fine-tuned for a variety of audio-visual downstream tasks. Our experiments demonstrate VAB's efficiency in producing high-quality audio from video and its ability to acquire semantic audio-visual features, yielding competitive results in audio-visual retrieval and classification.
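To make the pre-training objective concrete, below is a minimal PyTorch sketch of visually conditioned masked audio token prediction. The class name, dimensions, uniform 0.5 masking rate, and the simple concatenation of projected visual features with audio token embeddings are illustrative assumptions, not VAB's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedAudioTokenPredictor(nn.Module):
    """Transformer that predicts audio tokens given visual context (illustrative)."""

    def __init__(self, vocab_size=1024, dim=512, visual_dim=768,
                 num_layers=6, num_heads=8, max_len=512):
        super().__init__()
        self.mask_id = vocab_size                      # reserve one extra id for [MASK]
        self.token_emb = nn.Embedding(vocab_size + 1, dim)
        self.pos_emb = nn.Parameter(torch.zeros(1, max_len, dim))
        self.visual_proj = nn.Linear(visual_dim, dim)  # map visual features into the token space
        layer = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, audio_tokens, visual_feats):
        # audio_tokens: (B, T_a) ids from the audio tokenizer, possibly containing mask_id
        # visual_feats: (B, T_v, visual_dim) features from the image encoder
        T_v = visual_feats.size(1)
        a = self.token_emb(audio_tokens) + self.pos_emb[:, :audio_tokens.size(1)]
        x = torch.cat([self.visual_proj(visual_feats), a], dim=1)
        h = self.encoder(x)[:, T_v:]                   # keep only the audio positions
        return self.head(h)                            # (B, T_a, vocab_size)

def pretraining_loss(model, audio_tokens, visual_feats, mask_rate=0.5):
    # Corrupt a random subset of audio tokens with [MASK] and train the model
    # to recover them from the visual context and the remaining audio tokens.
    mask = torch.rand(audio_tokens.shape, device=audio_tokens.device) < mask_rate
    corrupted = audio_tokens.masked_fill(mask, model.mask_id)
    logits = model(corrupted, visual_feats)
    return F.cross_entropy(logits[mask], audio_tokens[mask])

# Toy forward pass: 2 clips, 100 audio tokens, 16 visual feature vectors each.
model = MaskedAudioTokenPredictor()
loss = pretraining_loss(model,
                        torch.randint(0, 1024, (2, 100)),
                        torch.randn(2, 16, 768))
```

Because the loss is computed only at masked positions, the same backbone learns bidirectional audio-visual context (useful for retrieval and classification) while also learning to fill in audio tokens from vision, which is exactly what generation requires.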
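The iterative decoding step can be read as MaskGIT-style parallel decoding; the sketch below, reusing the model above, is written under that assumption. The 8-step cosine schedule and greedy confidence-based selection are illustrative choices, and VAB's actual sampler may differ.

```python
import math
import torch

@torch.no_grad()
def iterative_decode(model, visual_feats, seq_len, num_steps=8):
    # Start from a fully masked audio sequence; at each step, tentatively
    # predict every position, keep the most confident predictions, and
    # re-mask the rest (fewer each round, via a cosine schedule).
    B, device = visual_feats.size(0), visual_feats.device
    tokens = torch.full((B, seq_len), model.mask_id, dtype=torch.long, device=device)
    for step in range(num_steps):
        logits = model(tokens, visual_feats)           # (B, T, vocab)
        conf, pred = logits.softmax(-1).max(-1)        # per-position confidence and argmax
        still_masked = tokens == model.mask_id
        tokens = torch.where(still_masked, pred, tokens)
        if step == num_steps - 1:
            break                                      # final step: accept everything
        # Cosine schedule: how many positions stay masked for the next round.
        num_remask = int(math.cos(math.pi / 2 * (step + 1) / num_steps) * seq_len)
        if num_remask == 0:
            break
        # Never re-mask tokens committed in earlier rounds.
        conf = conf.masked_fill(~still_masked, float("inf"))
        worst = conf.topk(num_remask, dim=-1, largest=False).indices
        tokens.scatter_(1, worst, model.mask_id)
    return tokens

# Generate 100 audio tokens for 2 clips in 8 parallel steps; the resulting
# ids would then be passed to the audio tokenizer's decoder for a waveform.
audio_tokens = iterative_decode(model, torch.randn(2, 16, 768), seq_len=100)
```

The appeal of this style of decoding is speed: all positions are predicted in parallel at each step, so a full audio token sequence takes a handful of forward passes rather than one pass per token as in autoregressive generation.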