We propose a new task, dataset and model for grounded video caption generation. This task unifies captioning and object grounding in video, where the objects in the caption are grounded in the video via temporally consistent bounding boxes. We make the following contributions. First, we present a task definition and a manually annotated test dataset for this task, referred to as GROunded Video Caption Generation (GROC). Second, we introduce a large-scale automatic annotation method that leverages an existing model for grounded still-image captioning together with an LLM for summarising frame-level captions into temporally consistent video-level captions. Furthermore, we prompt the LLM to track objects by language -- classifying noun phrases from the frame-level captions into the noun phrases of the video-level caption. We apply this approach to videos from the HowTo100M dataset, which results in a new large-scale training dataset, called HowToGround, with automatically annotated captions and spatio-temporally consistent bounding boxes with coherent natural language labels. Third, we introduce a new grounded video caption generation model, called VideoGround, and train it on the new automatically annotated HowToGround dataset. Finally, our VideoGround model sets the state of the art for the new task of grounded video caption generation. We perform extensive ablations that demonstrate the importance of the key technical contributions of our model.