Dense Video Object Captioning (DVOC) is the task of jointly detecting, tracking, and captioning object trajectories in a video, which requires understanding spatio-temporal details and describing them in natural language. Due to the complexity of the task and the high cost of manual annotation, previous approaches resort to disjoint training strategies, potentially leading to suboptimal performance. To circumvent this issue, we propose to generate captions for spatio-temporally localized entities by leveraging a state-of-the-art vision-language model (VLM). By extending the LVIS and LV-VIS datasets with our synthetic captions (LVISCap and LV-VISCap), we train MaskCaptioner, an end-to-end model capable of jointly detecting, segmenting, tracking, and captioning object trajectories. Moreover, with pretraining on LVISCap and LV-VISCap, MaskCaptioner achieves state-of-the-art DVOC results on three existing benchmarks: VidSTG, VLN, and BenSMOT. The datasets and code are available at https://www.gabriel.fiastre.fr/maskcaptioner/.
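To make the task output concrete: for each object, a DVOC system must produce a spatio-temporal trajectory (per-frame boxes and/or segmentation masks under a consistent track identity) together with a natural-language caption. The following is a minimal Python sketch of such an output structure; `ObjectTrajectory`, `dvoc_output_example`, and the field layout are hypothetical illustrations, not MaskCaptioner's actual interface.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# A box is (x1, y1, x2, y2) in pixel coordinates.
Box = Tuple[float, float, float, float]

@dataclass
class ObjectTrajectory:
    """One hypothetical DVOC prediction: a tracked object with per-frame
    localization and a single caption describing the whole trajectory."""
    track_id: int
    boxes: Dict[int, Box] = field(default_factory=dict)  # frame index -> box
    masks: Dict[int, List[int]] = field(default_factory=dict)  # frame index -> mask encoding (representation left abstract)
    caption: str = ""

def dvoc_output_example() -> List[ObjectTrajectory]:
    """Toy prediction for a two-frame clip containing one tracked object."""
    traj = ObjectTrajectory(track_id=0, caption="a brown dog running across the grass")
    traj.boxes[0] = (12.0, 40.0, 180.0, 210.0)
    traj.boxes[1] = (30.0, 42.0, 200.0, 215.0)
    return [traj]

if __name__ == "__main__":
    for t in dvoc_output_example():
        frames = sorted(t.boxes)
        print(f"track {t.track_id}: frames {frames[0]}..{frames[-1]} -> '{t.caption}'")
```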