Image-Language Foundation Models (ILFMs) have demonstrated remarkable success in vision-language understanding, providing transferable multimodal representations that generalize across diverse downstream image-based tasks. The advancement of video-text research has spurred growing interest in extending image-based models to the video domain. This paradigm, termed image-to-video transfer learning, substantially reduces the data and computational demands relative to training video-language models from scratch while achieving comparable or even stronger performance. This survey provides the first comprehensive review of this emerging field. It begins by summarizing the widely used ILFMs and their capabilities. We then systematically classify existing image-to-video transfer learning techniques into two root categories (frozen features and adapted features), along with numerous fine-grained subcategories, based on how image understanding capability is transferred to video tasks. Building upon the task-specific nature of image-to-video transfer, this survey methodically elaborates these strategies and details their applications across a spectrum of video-text learning tasks, ranging from fine-grained settings (e.g., spatio-temporal video grounding) to coarse-grained ones (e.g., video question answering). We further present a detailed experimental analysis of the efficacy of different image-to-video transfer learning paradigms on a range of downstream video understanding tasks. Finally, we identify prevailing challenges and highlight promising directions for future research. By offering a comprehensive and structured overview, this survey aims to establish a roadmap for advancing video-text learning based on existing ILFMs and to inspire future research in this rapidly evolving domain. A GitHub repository is available.