Referring understanding is a fundamental task that bridges natural language and visual content by localizing objects described in free-form expressions. However, existing works are constrained by limited language expressiveness, lacking the capacity to model object dynamics in spatial number and temporal state. To address these limitations, we introduce a new and general referring understanding task, termed referring multi-object tracking (RMOT). Its core idea is to employ a language expression as a semantic cue to guide multi-object tracking predictions, comprehensively accounting for variations in object quantity and temporal semantics. Along with RMOT, we introduce an RMOT benchmark named Refer-KITTI-V2, featuring scalable and diverse language expressions. To efficiently generate high-quality annotations covering object dynamics with minimal manual effort, we propose a semi-automatic labeling pipeline that formulates a total of 9,758 language prompts. In addition, we propose TempRMOT, an elegant end-to-end Transformer-based framework for RMOT. At its core is a query-driven Temporal Enhancement Module that represents each object as a Transformer query, enabling long-term spatial-temporal interactions with other objects and past frames to efficiently refine these queries. TempRMOT achieves state-of-the-art performance on both Refer-KITTI and Refer-KITTI-V2, demonstrating the effectiveness of our approach. The source code and dataset are available at https://github.com/zyn213/TempRMOT.