Object tracking is central to robot perception and scene understanding. Tracking-by-detection has long been the dominant paradigm for tracking objects of specific categories. Recently, large-scale pre-trained models have shown promising advances in detecting and segmenting objects and parts in 2D static images in the wild. This raises the question: can we re-purpose these large-scale pre-trained static-image models for open-vocabulary video tracking? In this paper, we re-purpose an open-vocabulary detector, a segmenter, and a dense optical flow estimator into a model that tracks and segments objects of any category in 2D videos. Our method predicts object and part tracks with associated language descriptions in monocular videos, rebuilding the pipeline of Tracktor with modern large pre-trained models for static-image detection and segmentation: we detect open-vocabulary object instances, propagate their boxes from frame to frame using a flow-based motion model, refine the propagated boxes with the box-regression module of the visual detector, and prompt an open-world segmenter with the refined boxes to segment the objects. We terminate an object track based on the objectness score of the propagated boxes as well as forward-backward optical flow consistency, and we re-identify objects across occlusions using deep feature matching. We show that our model achieves strong performance on multiple established video object segmentation and tracking benchmarks, and that it produces reasonable tracks on manipulation data. In particular, our model outperforms the previous state-of-the-art on UVO and BURST, benchmarks for open-world object tracking and segmentation, despite never being explicitly trained for tracking. We hope that our approach can serve as a simple and extensible framework for future research.
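Two of the steps above, propagating a box with a flow-based motion model and deciding track termination via forward-backward flow consistency, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the function names, the mean-flow propagation rule, and the drift tolerance are all assumptions for illustration.

```python
import numpy as np

def propagate_box(box, flow):
    """Shift a box (x1, y1, x2, y2) by the mean optical flow inside it.

    `flow` is an (H, W, 2) array of per-pixel (dx, dy) displacements.
    Averaging the flow inside the box is one simple motion model;
    the actual system may aggregate flow differently.
    """
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    region = flow[y1:y2, x1:x2].reshape(-1, 2)  # flow vectors inside the box
    dx, dy = region.mean(axis=0)
    return (box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy)

def fb_consistent(box, fwd_flow, bwd_flow, tol=1.0):
    """Forward-backward check: propagate the box forward one frame, then
    back with the reverse flow, and measure how far it drifts from its
    starting position. Small drift means the flow is trustworthy; large
    drift suggests occlusion, so the track can be terminated."""
    fwd = propagate_box(box, fwd_flow)
    back = propagate_box(fwd, bwd_flow)
    drift = max(abs(a - b) for a, b in zip(box, back))
    return drift <= tol
```

In a full tracker, `fb_consistent` would be combined with the detector's objectness score on the propagated box: a track survives a frame only if both signals agree the object is still visible.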