Zero-shot Natural Language-Video Localization (NLVL) methods have exhibited promising results in training NLVL models exclusively with raw video data by dynamically generating video segments and pseudo-query annotations. However, existing pseudo-queries often lack grounding in the source video, resulting in unstructured and disjointed content. In this paper, we investigate the effectiveness of commonsense reasoning in zero-shot NLVL. Specifically, we present CORONET, a zero-shot NLVL framework that leverages commonsense to bridge the gap between videos and generated pseudo-queries via a commonsense enhancement module. CORONET employs Graph Convolutional Networks (GCNs) to encode commonsense information extracted from a knowledge graph, conditioned on the video, and cross-attention mechanisms to enhance the encoded video and pseudo-query representations prior to localization. Through empirical evaluations on two benchmark datasets, we demonstrate that CORONET surpasses both zero-shot and weakly supervised baselines, achieving improvements of up to 32.13% across various recall thresholds and up to 6.33% in mIoU. These results underscore the significance of leveraging commonsense reasoning for zero-shot NLVL.
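The two mechanisms named above can be sketched in isolation. The following is a minimal NumPy illustration of a single GCN layer (symmetrically normalized adjacency, as in Kipf-style GCNs) and scaled dot-product cross-attention; the actual CORONET module, its feature dimensions, and its learned parameters are not specified in the abstract, so all shapes and weights here are hypothetical placeholders.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    A: (n, n) adjacency of the commonsense subgraph (0/1, symmetric).
    H: (n, d_in) node features; W: (d_in, d_out) learned weights.
    """
    A_tilde = A + np.eye(A.shape[0])          # add self-loops
    d = A_tilde.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # normalized adjacency
    return np.maximum(A_hat @ H @ W, 0.0)      # ReLU activation

def cross_attention(Q, K, V):
    """Scaled dot-product attention: queries attend over keys/values."""
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

# Hypothetical shapes: 5 commonsense nodes, 10 video clip features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
nodes = gcn_layer(A, rng.standard_normal((5, 8)), rng.standard_normal((8, 4)))
video = rng.standard_normal((10, 4))
enhanced_video = cross_attention(video, nodes, nodes)  # (10, 4)
```

In this sketch the video clip features act as queries and the GCN-encoded commonsense nodes as keys and values, so each clip representation is enriched with a weighted mixture of commonsense information; the same pattern could be applied to pseudo-query tokens.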