We present ARTrackV2, which integrates two pivotal aspects of tracking: determining where to look (localization) and how to describe (appearance analysis) the target object across video frames. Building on the foundation of its predecessor, ARTrackV2 extends the concept by introducing a unified generative framework that "reads out" the object's trajectory and "retells" its appearance in an autoregressive manner. This approach fosters a time-continuous methodology that models the joint evolution of motion and visual features, guided by previous estimates. Furthermore, ARTrackV2 stands out for its efficiency and simplicity, obviating the less efficient intra-frame autoregression and the hand-tuned parameters for appearance updates. Despite its simplicity, ARTrackV2 achieves state-of-the-art performance on prevailing benchmark datasets while demonstrating remarkable efficiency improvements. In particular, ARTrackV2 achieves an AO score of 79.5\% on GOT-10k and an AUC of 86.1\% on TrackingNet while being $3.6\times$ faster than ARTrack. The code will be released.