With the rise of web videos, managing and understanding large-scale video datasets has become increasingly important. Video Large Language Models (VideoLLMs) have recently emerged owing to their strong video understanding capabilities. However, training and inference for VideoLLMs demand vast amounts of data, posing significant challenges to data management, particularly regarding efficiency, robustness, and effectiveness. In this work, we present KeyVideoLLM, a text-video frame similarity-based keyframe selection method designed to manage VideoLLM data efficiently, robustly, and effectively. Specifically, KeyVideoLLM achieves a data compression rate of up to 60.9 times, substantially lowering disk space requirements and demonstrating its high efficiency. Additionally, it maintains a 100% selection success rate across all video formats and scales, speeds up processing by up to 200 times compared to existing keyframe selection methods, and requires no hyperparameter tuning. Beyond its efficiency and robustness, KeyVideoLLM further improves model performance on video question-answering tasks during both the training and inference stages. Notably, it consistently achieves state-of-the-art (SoTA) results on diverse datasets.
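The core idea of text-video frame similarity-based selection can be illustrated with a minimal sketch: embed the text query and each video frame into a shared space, score frames by cosine similarity, and keep the top-k. This is an illustrative simplification, not the authors' exact pipeline; the `select_keyframes` helper and the toy embeddings are hypothetical, and a real system would use a pretrained text-image encoder to produce the embeddings.

```python
import numpy as np

def select_keyframes(frame_embeds: np.ndarray, text_embed: np.ndarray, k: int) -> np.ndarray:
    """Return the indices (in temporal order) of the k frames most similar to the text.

    frame_embeds: (num_frames, dim) array of per-frame embeddings.
    text_embed:   (dim,) embedding of the question/caption text.
    """
    # Normalize so that dot products become cosine similarities.
    f = frame_embeds / np.linalg.norm(frame_embeds, axis=1, keepdims=True)
    t = text_embed / np.linalg.norm(text_embed)
    sims = f @ t
    # Take the k highest-scoring frames, then restore temporal order.
    top_k = np.argsort(sims)[-k:]
    return np.sort(top_k)

# Toy example: 4 synthetic "frame" embeddings along basis directions;
# the text embedding overlaps most with frames 2 and 0.
frames = np.eye(4, 8)
text = frames[2] + 0.5 * frames[0]
print(select_keyframes(frames, text, k=2))  # frames 0 and 2 are selected
```

Selecting only the k most query-relevant frames is what yields the compression: instead of storing or feeding every decoded frame to the VideoLLM, only a small, text-relevant subset is kept.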