A major reason behind the recent success of large language models (LLMs) is their \textit{in-context learning} capability, which makes it possible to rapidly adapt them to downstream text-based tasks by prompting them with a small number of relevant demonstrations. While large vision-language models (VLMs) have recently been developed for tasks requiring both text and images, they largely lack in-context learning over visual information, especially in understanding and generating text about videos. In this work, we implement \textbf{E}mergent \textbf{I}n-context \textbf{Le}arning on \textbf{V}ideos (\eilev{}), a novel training paradigm that induces in-context learning over video and text by capturing key properties of pre-training data found by prior work to be essential for in-context learning in transformers. In our experiments, we show that \eilev-trained models outperform other off-the-shelf VLMs in few-shot video narration for novel, rare actions. Furthermore, we demonstrate that these key properties of bursty distributions, skewed marginal distributions, and dynamic meaning each contribute to varying degrees to VLMs' in-context learning capability in narrating procedural videos. Our results, analysis, and \eilev{}-trained models yield numerous insights about the emergence of in-context learning over video and text, creating a foundation for future work to optimize and scale VLMs for open-domain video understanding and reasoning. Our code and demo are available at \url{https://github.com/yukw777/EILEV}.