Recent advances in multimodal LLMs have led to several video-text models being proposed for critical video-related tasks. However, most prior work supports visual input only, essentially muting the audio signal in the video. The few models that do support both audio and visual input are not explicitly trained on audio data. Hence, the effect of audio on video understanding remains largely unexplored. To this end, we propose a model architecture that handles audio-visual inputs explicitly. We train our model on both the audio and visual data of a video instruction-tuning dataset. Comparisons with vision-only baselines and other audio-visual models show that training on audio data indeed leads to better-grounded responses. To enable better evaluation of audio-visual models, we also release a human-annotated benchmark dataset with audio-aware question-answer pairs.