Large-scale vision-language models (VLMs) such as CLIP have gained popularity for their generalizable and expressive multimodal representations. By leveraging large-scale training data with diverse textual metadata, VLMs acquire open-vocabulary capabilities and can solve tasks beyond their training scope. This paper investigates the temporal awareness of VLMs, assessing their ability to position visual content in time. We introduce TIME10k, a benchmark dataset of over 10,000 images with temporal ground truth, and evaluate the time-awareness of 37 VLMs using a novel methodology. Our investigation reveals that temporal information is structured along a low-dimensional, non-linear manifold in the VLM embedding space. Based on this insight, we propose methods to derive an explicit ``timeline'' representation from the embedding space. These representations model time and its chronological progression and thereby facilitate temporal reasoning tasks. Our timeline approaches achieve accuracy competitive with or superior to a prompt-based baseline while being computationally efficient. All code and data are available at https://tekayanidham.github.io/timeline-page/.
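To make the prompt-based baseline mentioned above concrete, the following is a minimal sketch, not the paper's exact protocol: it estimates an image's date by comparing the CLIP image embedding against text prompts for candidate years. The checkpoint name, prompt template, candidate-year grid, and image path are illustrative assumptions.

```python
# Minimal sketch of a prompt-based dating baseline with CLIP (assumptions:
# checkpoint, prompt template, decade-level candidate years, image path).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

years = list(range(1900, 2021, 10))  # candidate decades (assumption)
prompts = [f"a photo taken in the year {y}" for y in years]

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    out = model(**inputs)

# logits_per_image holds the image-to-prompt similarities; pick the best year.
probs = out.logits_per_image.softmax(dim=-1).squeeze(0)
print("estimated year:", years[int(probs.argmax())])
```

In contrast, the timeline approaches described in the abstract operate directly on the embedding space rather than scoring one text prompt per candidate date, which is the source of their computational advantage.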