The rapid development of large language models (LLMs) has significantly advanced multimodal large language models (LMMs), particularly in vision-language tasks. However, existing video-language models often overlook precise temporal localization and struggle with videos of varying lengths. We introduce TimeMarker, a versatile Video-LLM designed for high-quality dialogue grounded in video content, with an emphasis on temporal localization. TimeMarker integrates Temporal Separator Tokens to enhance temporal awareness, accurately marking specific moments within videos. It employs the AnyLength mechanism for dynamic frame sampling and adaptive token merging, enabling effective handling of both short and long videos. Additionally, TimeMarker leverages diverse datasets, including temporal-related video QA datasets that are further transformed for training, to strengthen its temporal understanding. Image and interleaved data are also employed to further enhance the model's semantic perception. Evaluations demonstrate that TimeMarker achieves state-of-the-art performance across multiple benchmarks, excelling in both short and long video categories. Our project page is at \url{https://github.com/TimeMarker-LLM/TimeMarker/}.
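To make the two mechanisms concrete, below is a minimal Python sketch of how Temporal Separator Tokens and AnyLength could be realized. The function names, the `sec{t}` separator format, the 128-frame budget, and the merge threshold are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of the two mechanisms described in the abstract.
# Names, the "sec{t}" separator format, the 128-frame budget, and the
# 120 s merge threshold are assumptions made for illustration only.

def anylength_sampling(duration_sec: float,
                       max_frames: int = 128,
                       base_fps: float = 1.0) -> tuple[float, int, int]:
    """AnyLength-style adaptation: short clips keep a high frame rate and
    unmerged tokens; long videos lower the effective FPS and merge more
    visual tokens per frame so the sequence fits the LLM context window."""
    fps = min(base_fps, max_frames / max(duration_sec, 1e-6))
    n_frames = min(max_frames, max(1, int(duration_sec * fps)))
    merge_factor = 1 if duration_sec <= 120 else 4  # assumed threshold
    return fps, n_frames, merge_factor


def interleave_separators(frame_tokens: list[list[str]],
                          timestamps: list[float]) -> list[str]:
    """Interleave textual temporal separator tokens with per-frame visual
    tokens, so the model can ground answers to absolute timestamps."""
    sequence: list[str] = []
    for t, tokens in zip(timestamps, frame_tokens):
        sequence.append(f"sec{int(t)}")  # assumed separator token format
        sequence.extend(tokens)
    return sequence
```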