The development of multimodal large language models (MLLMs) has advanced general video understanding. However, existing video evaluation benchmarks primarily focus on non-interactive videos, such as movies and recordings. To fill this gap, this paper proposes LiViBench, the first omnimodal benchmark for interactive livestream videos. It features a diverse set of 24 tasks covering perceptual, reasoning, and livestream-specific challenges. To construct the dataset efficiently, we design a standardized semi-automatic annotation workflow that incorporates human-in-the-loop review at multiple stages. The workflow leverages multiple MLLMs as a multi-agent system to produce comprehensive video descriptions and uses a seed-question-driven method to construct high-quality annotations. All videos in the benchmark include audio, speech, and real-time comment modalities. To enhance models' understanding of interactive videos, we design a tailored two-stage instruction-tuning scheme and propose a Video-to-Comment Retrieval (VCR) module that improves the model's ability to utilize real-time comments. Based on these advancements, we develop LiVi-LLM-7B, an MLLM with enhanced understanding of interactive livestreams. Experiments show that our model outperforms larger open-source models with up to 72B parameters, narrows the gap with leading proprietary models on LiViBench, and improves performance on general video benchmarks, including VideoMME, LongVideoBench, MLVU, and VideoEval-Pro.
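The abstract does not specify the internals of the VCR module. A minimal sketch of one plausible realization is shown below, assuming the module retrieves the top-k real-time comments most relevant to each video segment via cosine similarity in a shared embedding space; the function name `retrieve_comments` and all shapes are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a Video-to-Comment Retrieval (VCR) step:
# given per-segment video embeddings and per-comment text embeddings
# in a shared space, select the top-k comments for each segment by
# cosine similarity. The actual module architecture is not described
# in the abstract; names and shapes here are assumptions.
import numpy as np

def retrieve_comments(seg_emb: np.ndarray,  # (S, d) video-segment embeddings
                      com_emb: np.ndarray,  # (C, d) real-time comment embeddings
                      k: int = 5) -> np.ndarray:
    """Return indices of the k most similar comments per segment, shape (S, k)."""
    # L2-normalize rows so the dot product equals cosine similarity.
    seg = seg_emb / np.linalg.norm(seg_emb, axis=1, keepdims=True)
    com = com_emb / np.linalg.norm(com_emb, axis=1, keepdims=True)
    sim = seg @ com.T  # (S, C) similarity matrix
    # Sort each row in descending order of similarity and keep the top k.
    return np.argsort(-sim, axis=1)[:, :k]

# Toy usage: 3 segments, 10 comments, 16-dimensional embeddings.
rng = np.random.default_rng(0)
idx = retrieve_comments(rng.normal(size=(3, 16)), rng.normal(size=(10, 16)), k=3)
print(idx.shape)  # (3, 3)
```

The retrieved comments would then be passed to the model alongside the video and speech inputs, which is one simple way such a module could ground responses in viewer interaction.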