This paper tackles the challenging problem of video question answering (VideoQA). Despite notable progress, current methods fall short of effectively integrating questions with video frames and semantic object-level abstractions to create question-aware video representations. We introduce Local-Global Question Aware Video Embedding (LGQAVE), which incorporates three major innovations to better integrate multi-modal knowledge and emphasize the semantic visual concepts relevant to a given question. LGQAVE moves beyond traditional ad-hoc frame sampling by employing a cross-attention mechanism that precisely identifies the frames most relevant to the question. It then captures the dynamics of objects within these frames using distinct graphs, grounding them in question semantics with the miniGPT model. These graphs are processed by a question-aware dynamic graph transformer (Q-DGT), which refines their outputs into nuanced local and global video representations. An additional cross-attention module fuses the local and global embeddings into the final video embedding, which a language model consumes to generate answers. Extensive evaluations across multiple benchmarks demonstrate that LGQAVE significantly outperforms existing models at producing accurate multiple-choice and open-ended answers.
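To make the first innovation concrete, the sketch below illustrates question-aware frame selection via cross-attention, the step that replaces ad-hoc frame sampling. This is a minimal illustration under our own assumptions, not the authors' released implementation: the module name, feature dimensions, and the use of attention weights as top-k relevance scores are all illustrative choices.

```python
# Minimal sketch of question-aware frame selection via cross-attention.
# Module name, dimensions, and top-k scoring are illustrative assumptions,
# not the paper's released code.
import torch
import torch.nn as nn


class QuestionAwareFrameSelector(nn.Module):
    """Scores video frames against a pooled question embedding with
    cross-attention and keeps the top-k most relevant frames."""

    def __init__(self, dim: int = 512, num_heads: int = 8, top_k: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.top_k = top_k

    def forward(self, frame_feats: torch.Tensor, question_feat: torch.Tensor):
        # frame_feats: (B, T, D) per-frame visual features
        # question_feat: (B, D) pooled question embedding
        q = question_feat.unsqueeze(1)  # (B, 1, D): question acts as the query
        # The question attends over frames; the attention weights serve as
        # per-frame relevance scores.
        _, attn_weights = self.attn(
            q, frame_feats, frame_feats,
            need_weights=True, average_attn_weights=True,
        )
        scores = attn_weights.squeeze(1)  # (B, T)
        k = min(self.top_k, frame_feats.size(1))
        # Keep the k highest-scoring frames, re-sorted into temporal order.
        top_idx = scores.topk(k, dim=-1).indices.sort(dim=-1).values
        batch_idx = torch.arange(frame_feats.size(0)).unsqueeze(-1)
        return frame_feats[batch_idx, top_idx], top_idx


if __name__ == "__main__":
    selector = QuestionAwareFrameSelector(dim=512, top_k=4)
    frames = torch.randn(2, 16, 512)   # 2 videos, 16 frames each
    question = torch.randn(2, 512)     # pooled question embeddings
    selected, idx = selector(frames, question)
    print(selected.shape, idx.shape)   # (2, 4, 512), (2, 4)
```

Scoring frames with the question as the attention query, rather than sampling frames uniformly, is what lets the downstream object graphs and Q-DGT operate only on question-relevant visual content.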