This paper introduces Auditory Referring Multi-Object Tracking (AR-MOT), the task of dynamically tracking specific objects in a video sequence based on audio expressions, which is a challenging problem in autonomous driving. Because existing methods lack the capacity to model semantics jointly across audio and video, prior work has focused mainly on text-based referring multi-object tracking, which often comes at the cost of tracking quality, interaction efficiency, and even the safety of assistance systems, limiting its applicability to autonomous driving. In this paper, we study AR-MOT from the perspectives of audio-video fusion and audio-video tracking. We propose EchoTrack, an end-to-end AR-MOT framework built on dual-stream vision transformers. The two streams are intertwined through our Bidirectional Frequency-domain Cross-attention Fusion Module (Bi-FCFM), which fuses audio and video features bidirectionally in both the frequency and spatiotemporal domains. Moreover, we propose the Audio-visual Contrastive Tracking Learning (ACTL) regime, which effectively extracts semantically homogeneous features shared by audio expressions and the visual objects they refer to. Beyond the architectural design, we establish the first set of large-scale AR-MOT benchmarks: Echo-KITTI, Echo-KITTI+, and Echo-BDD. Extensive experiments on these benchmarks demonstrate the effectiveness of the proposed EchoTrack and its components. The source code and datasets are available at https://github.com/lab206/EchoTrack.
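To make the fusion idea concrete, the following is a minimal NumPy sketch of bidirectional cross-attention over frequency-domain features. It is an illustration of the general mechanism only, not the paper's Bi-FCFM: the function names, shapes, and the use of magnitude spectra are assumptions, and the real module also operates in the spatiotemporal domain with learned projections.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, kv, scale):
    # q: (Tq, d), kv: (Tk, d); plain scaled dot-product attention.
    attn = softmax(q @ kv.T * scale, axis=-1)
    return attn @ kv

def bidirectional_freq_fusion(audio, video):
    """Fuse audio (Ta, d) and video (Tv, d) features bidirectionally.

    Sketch of the frequency branch only: we attend over magnitude
    spectra taken along the temporal axis. The paper's Bi-FCFM may
    use complex-valued or learned transforms instead.
    """
    d = audio.shape[1]
    scale = 1.0 / np.sqrt(d)
    # Real FFT along time yields Ta//2+1 (resp. Tv//2+1) frequency bins.
    fa = np.abs(np.fft.rfft(audio, axis=0))
    fv = np.abs(np.fft.rfft(video, axis=0))
    # Bidirectional: audio queries video, and video queries audio.
    audio_enh = cross_attention(fa, fv, scale)
    video_enh = cross_attention(fv, fa, scale)
    return audio_enh, video_enh

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16))   # 8 audio frames, 16-dim features
v = rng.standard_normal((12, 16))  # 12 video frames, 16-dim features
ea, ev = bidirectional_freq_fusion(a, v)
print(ea.shape, ev.shape)  # (5, 16) (7, 16)
```

Each enhanced stream keeps its own frequency resolution (rfft of length n gives n//2+1 bins) while incorporating information attended from the other modality.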
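The contrastive objective can likewise be sketched. Below is a symmetric InfoNCE-style loss, a common choice for aligning paired embeddings across modalities; ACTL's exact formulation may differ, and all names and shapes here are illustrative assumptions.

```python
import numpy as np

def info_nce(audio_emb, visual_emb, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss.

    Row i of each (N, d) matrix is assumed to be a matched
    audio-expression / visual-object pair; all other rows act
    as negatives.
    """
    # L2-normalize so dot products are cosine similarities.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = visual_emb / np.linalg.norm(visual_emb, axis=1, keepdims=True)
    logits = a @ v.T / temperature      # (N, N) similarity matrix
    labels = np.arange(len(a))          # matched pairs lie on the diagonal

    def ce(l):
        # Cross-entropy of each row against its diagonal entry.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Symmetric over the audio->visual and visual->audio directions.
    return 0.5 * (ce(logits) + ce(logits.T))

rng = np.random.default_rng(1)
emb = rng.standard_normal((4, 32))
low = info_nce(emb, emb)                          # perfectly aligned pairs
high = info_nce(emb, rng.standard_normal((4, 32)))  # random, unaligned pairs
```

Minimizing such a loss pulls each audio expression toward the visual object it refers to while pushing it away from the other objects in the batch, which is the "homogeneous feature" alignment the abstract describes.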