Memory-based trackers are video object segmentation methods that build the target model by concatenating recently tracked frames into a memory buffer and localize the target by attending the current image to the buffered frames. While such trackers already achieve top performance on many benchmarks, it was the recent release of SAM2 that brought memory-based trackers into the focus of the visual object tracking community. Nevertheless, modern trackers still struggle in the presence of distractors. We argue that a more sophisticated memory model is required, and propose a new distractor-aware memory model for SAM2, along with an introspection-based update strategy, that jointly address segmentation accuracy and tracking robustness. We denote the resulting tracker SAM2.1++. We also propose a new distractor-distilled dataset, DiDi, to better study the distractor problem. SAM2.1++ outperforms SAM2.1 and related SAM memory extensions on seven benchmarks and sets a solid new state of the art on six of them.