Amid the rapid advancement of image-based Large Vision-Language Models (image-LVLMs), the transition to video-based models (video-LVLMs) is hindered by the limited availability of high-quality video data. This paper addresses the challenge by leveraging the visual commonalities between images and videos to efficiently evolve image-LVLMs into video-LVLMs. We present a cost-effective video-LVLM that enhances the model architecture, introduces novel training strategies, and identifies the most effective types of video instruction data. Our weighted token sampler significantly compresses the number of visual tokens per video frame, effectively cutting computational expenses. We also find that judiciously using just 10% of the video data used by prior video-LVLMs yields impressive results across the various training phases. Moreover, we examine the influence of video instruction data in resource-limited settings, highlighting the importance of incorporating video training data that emphasizes temporal understanding. The resulting Fewer Tokens and Fewer Videos LVLM (FTFV-LVLM) exhibits exceptional performance across video and image benchmarks, validating our model's design and training approaches.
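The abstract does not give the weighted token sampler's exact formulation, but the general idea of importance-weighted token pruning can be sketched as follows. This is a rough illustration only, not the paper's method: the function name, the `keep_ratio` parameter, and the stand-in importance score are all hypothetical.

```python
import numpy as np

def weighted_token_sampler(frame_tokens, weights, keep_ratio=0.25, rng=None):
    """Keep a weighted subset of visual tokens from one video frame.

    frame_tokens: (N, D) array of N visual token embeddings.
    weights:      (N,) non-negative importance scores (hypothetical;
                  e.g. derived from attention maps).
    keep_ratio:   fraction of tokens to retain.
    """
    rng = rng or np.random.default_rng(0)
    n_keep = max(1, int(len(frame_tokens) * keep_ratio))
    probs = weights / weights.sum()
    # Sample without replacement, biased toward high-weight tokens,
    # then restore the original token order.
    idx = rng.choice(len(frame_tokens), size=n_keep, replace=False, p=probs)
    return frame_tokens[np.sort(idx)]

# Example: compress 256 tokens per frame down to 64 (a 4x reduction).
tokens = np.random.default_rng(1).normal(size=(256, 768))
weights = np.abs(tokens).mean(axis=1)  # stand-in importance score
kept = weighted_token_sampler(tokens, weights, keep_ratio=0.25)
print(kept.shape)  # (64, 768)
```

A deterministic top-k selection over the same weights would be an equally plausible reading of "weighted token sampler"; either way, the per-frame token count fed to the language model shrinks by the chosen ratio, which is where the computational savings come from.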