Composed Video Retrieval (CoVR) aims to retrieve a target video given a query video and a modifying text. Current CoVR methods fail to fully exploit modern Vision-Language Models (VLMs), either using outdated architectures or requiring computationally expensive fine-tuning and slow caption generation. We introduce PREGEN (PRE GENeration extraction), an efficient and powerful CoVR framework that overcomes these limitations. Our approach uniquely pairs a frozen, pre-trained VLM with a lightweight encoding model, eliminating the need for any VLM fine-tuning. We feed the query video and modifying text into the VLM and extract the hidden state of the final token from each layer. A simple encoder is then trained on these pooled representations, producing a semantically rich and compact embedding for retrieval. PREGEN significantly advances the state of the art, surpassing all prior methods on standard CoVR benchmarks with Recall@1 gains of +27.23 and +69.59. Our method is robust across different VLM backbones and exhibits strong zero-shot generalization to more complex textual modifications, highlighting its effectiveness and semantic capabilities.
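The extraction pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the tiny stand-in "VLM", all layer sizes, and the encoder architecture are assumptions chosen for compactness; in practice the frozen VLM would be a real pre-trained model returning per-layer hidden states.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the frozen VLM. In practice this would be a
# large pre-trained model that returns hidden states for every layer;
# the layer count and widths here are illustrative only.
class TinyFrozenVLM(nn.Module):
    def __init__(self, num_layers=4, hidden=32):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(num_layers))

    @torch.no_grad()  # frozen: no gradients flow into the VLM
    def forward(self, tokens):  # tokens: (batch, seq_len, hidden)
        hidden_states = []
        x = tokens
        for layer in self.layers:
            x = torch.tanh(layer(x))
            hidden_states.append(x)  # keep every layer's output
        return hidden_states

# Lightweight trainable encoder over the per-layer final-token states.
class PregenEncoder(nn.Module):
    def __init__(self, num_layers=4, hidden=32, embed_dim=16):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(num_layers * hidden, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, per_layer_last_tokens):  # (batch, num_layers, hidden)
        z = self.proj(per_layer_last_tokens.flatten(1))
        return nn.functional.normalize(z, dim=-1)  # unit-norm retrieval embedding

vlm, encoder = TinyFrozenVLM(), PregenEncoder()
tokens = torch.randn(2, 10, 32)  # fused video+text token sequence (illustrative)
states = vlm(tokens)
# Final-token hidden state from each layer, stacked: (batch, num_layers, hidden)
last = torch.stack([h[:, -1, :] for h in states], dim=1)
emb = encoder(last)
print(emb.shape)  # torch.Size([2, 16])
```

Only the small encoder receives gradients during training, which is what makes the framework cheap relative to fine-tuning the VLM itself.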