In this paper, we propose the Vision-Audio-Language Omni-peRception pretraining model (VALOR) for multimodal understanding and generation. Unlike widely studied vision-language pretraining models, VALOR jointly models the relationships among vision, audio, and language in an end-to-end manner. It contains three separate encoders for single-modality representations and a decoder for multimodal conditional text generation. We design two pretext tasks to pretrain VALOR: Multimodal Grouping Alignment (MGA) and Multimodal Grouping Captioning (MGC). MGA projects vision, language, and audio into a common space, establishing vision-language, audio-language, and audiovisual-language alignment simultaneously. MGC learns to generate text tokens conditioned on vision, audio, or both. To promote vision-audio-language pretraining research, we construct VALOR-1M, a large-scale, high-quality tri-modality dataset containing one million audible videos with human-annotated audiovisual captions. Extensive experiments show that VALOR learns strong multimodal correlations and generalizes to various downstream tasks (e.g., retrieval, captioning, and question answering) with different input modalities (e.g., vision-language, audio-language, and audiovisual-language). VALOR achieves new state-of-the-art performance on a series of public cross-modality benchmarks. Code and data are available at the project page: https://casia-iva-group.github.io/projects/VALOR.
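To make the two pretext tasks concrete, the following is a minimal sketch, not the authors' implementation: encoders are stubbed with linear layers, feature dimensions and the audiovisual fusion (a simple sum of pooled embeddings) are illustrative assumptions, and MGC is reduced to a toy single-token prediction.

```python
# Hypothetical sketch of VALOR's two pretext tasks (MGA and MGC).
# All module shapes and the fusion strategy are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ValorSketch(nn.Module):
    def __init__(self, dim=512, vocab=30522):
        super().__init__()
        # Stand-ins for the three single-modality encoders.
        self.vision_enc = nn.Linear(2048, dim)  # pooled video features (assumed dim)
        self.audio_enc = nn.Linear(1024, dim)   # pooled audio features (assumed dim)
        self.text_enc = nn.Linear(768, dim)     # pooled caption features (assumed dim)
        self.decoder_head = nn.Linear(dim, vocab)  # stand-in for the text decoder

    def mga_loss(self, v, a, t, tau=0.07):
        """Multimodal Grouping Alignment: contrast each modality group
        (vision, audio, vision+audio) against the paired caption with
        a symmetric in-batch InfoNCE loss."""
        txt = F.normalize(self.text_enc(t), dim=-1)
        groups = {
            "v": F.normalize(self.vision_enc(v), dim=-1),
            "a": F.normalize(self.audio_enc(a), dim=-1),
        }
        # Audiovisual group: fused here by summation (an assumption).
        groups["av"] = F.normalize(groups["v"] + groups["a"], dim=-1)
        labels = torch.arange(txt.size(0))
        loss = 0.0
        for g in groups.values():
            logits = g @ txt.T / tau  # similarity of every group/caption pair
            loss += 0.5 * (F.cross_entropy(logits, labels)
                           + F.cross_entropy(logits.T, labels))
        return loss / len(groups)

    def mgc_loss(self, cond, token_ids):
        """Multimodal Grouping Captioning, reduced to a toy single-token
        case: predict a caption token conditioned on one pooled modality
        group; the real model decodes full captions autoregressively."""
        logits = self.decoder_head(cond)  # (batch, vocab)
        return F.cross_entropy(logits, token_ids)


# Usage on random tensors, just to show the shapes involved.
m = ValorSketch()
v, a, t = torch.randn(4, 2048), torch.randn(4, 1024), torch.randn(4, 768)
print(m.mga_loss(v, a, t))
```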