The development of video large multimodal models (LMMs) has been hindered by the difficulty of curating large amounts of high-quality raw data from the web. To address this, we propose an alternative approach by creating a high-quality synthetic dataset specifically for video instruction-following, namely LLaVA-Video-178K. This dataset includes key tasks such as detailed captioning, open-ended question-answering (QA), and multiple-choice QA. By training on this dataset, in combination with existing visual instruction tuning data, we introduce LLaVA-Video, a new video LMM. Our experiments demonstrate that LLaVA-Video achieves strong performance across various video benchmarks, highlighting the effectiveness of our dataset. We plan to release the dataset, its generation pipeline, and the model checkpoints.