Despite the significant progress of fully supervised video captioning, zero-shot methods remain much less explored. In this paper, we propose a novel zero-shot video captioning framework named Retrieval-Enhanced Test-Time Adaptation (RETTA), which takes advantage of existing pretrained large-scale vision and language models to directly generate captions with test-time adaptation. Specifically, we bridge video and text using four key models, chosen for their source-code availability: a general video-text retrieval model, XCLIP; a general image-text matching model, CLIP; a text alignment model, AnglE; and a text generation model, GPT-2. The main challenge is how to make the text generation model sufficiently aware of the content of a given video so that it can generate corresponding captions. To address this problem, we propose using learnable tokens as a communication medium among the four frozen models GPT-2, XCLIP, CLIP, and AnglE. Unlike the conventional approach of training such tokens on training data, we learn them from soft targets of the inference data under several carefully crafted loss functions, which enable the tokens to absorb video information tailored to GPT-2. This procedure can be completed efficiently in just a few iterations (16 in our experiments) and requires no ground-truth data. Extensive experimental results on three widely used datasets, MSR-VTT, MSVD, and VATEX, show absolute improvements of 5.1%-32.4% in terms of the main metric, CIDEr, compared to several state-of-the-art zero-shot video captioning methods.
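To make the test-time adaptation idea concrete, the following is a minimal PyTorch sketch of the kind of loop described above: learnable tokens are optimized for a single test video against a frozen embedding space and then used to condition the language model. The encoder stubs, dimensions, and cosine loss below are illustrative placeholders only, not the paper's actual XCLIP/CLIP/AnglE objectives or GPT-2 interface.

```python
# Minimal sketch of test-time adaptation with learnable tokens, assuming PyTorch.
# FrozenEncoderStub is a hypothetical stand-in for a frozen pretrained encoder;
# the real framework uses GPT-2, XCLIP, CLIP, and AnglE with several loss terms.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, NUM_TOKENS, NUM_ITERS = 512, 4, 16  # 16 iterations, as in the paper


class FrozenEncoderStub(nn.Module):
    """Hypothetical placeholder for a frozen pretrained encoder (e.g. CLIP/XCLIP)."""

    def __init__(self, in_dim, out_dim=EMB_DIM):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)
        for p in self.parameters():
            p.requires_grad_(False)  # all pretrained models stay frozen

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)


def test_time_adapt(video_feat, lm_hidden, video_encoder, text_encoder):
    """Optimize learnable tokens for one test video; no ground-truth caption is used.

    video_feat: raw features of the single test video.
    lm_hidden:  a placeholder tensor standing in for the language-model side.
    """
    # Learnable tokens act as the communication medium among the frozen models.
    tokens = nn.Parameter(torch.randn(NUM_TOKENS, EMB_DIM) * 0.02)
    optim = torch.optim.AdamW([tokens], lr=1e-2)

    video_emb = video_encoder(video_feat)  # frozen video embedding (soft target)
    for _ in range(NUM_ITERS):             # a few iterations suffice
        text_emb = text_encoder(tokens.mean(0, keepdim=True) + lm_hidden)
        # Simplified cosine loss pulling the token-conditioned text embedding
        # toward the video embedding (standing in for the paper's loss functions).
        loss = 1.0 - F.cosine_similarity(text_emb, video_emb).mean()
        optim.zero_grad()
        loss.backward()
        optim.step()
    return tokens.detach()  # would then condition the text generator as soft prompts


# Usage with random placeholder features for one test video:
video_encoder = FrozenEncoderStub(in_dim=1024)
text_encoder = FrozenEncoderStub(in_dim=EMB_DIM)
video_feat = torch.randn(1, 1024)
lm_hidden = torch.randn(1, EMB_DIM)
adapted_tokens = test_time_adapt(video_feat, lm_hidden, video_encoder, text_encoder)
```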