Multiple object tracking (MOT) from unmanned aerial vehicle (UAV) platforms requires efficient motion modeling, because UAV-MOT must handle both local object motion and global camera motion. Motion blur further increases the difficulty of detecting large moving objects. Previous UAV motion modeling approaches either focus only on local motion or ignore motion blurring effects, which limits their tracking performance and speed. To address these issues, we propose the Motion Mamba Module, which explores both local and global motion features through cross-correlation and bidirectional Mamba modules for better motion modeling. To address the detection difficulties caused by motion blur, we also design a motion margin loss that effectively improves detection accuracy on motion-blurred objects. Built on the Motion Mamba Module and the motion margin loss, our proposed MM-Tracker surpasses the state-of-the-art on two widely used open-source UAV-MOT datasets. Code will be made available.
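To make the two ingredients of the Motion Mamba Module concrete, the sketch below shows (a) a local cross-correlation volume between consecutive-frame feature maps, a standard way to expose local motion cues, and (b) a toy bidirectional linear-recurrence scan standing in for the bidirectional Mamba blocks that aggregate global context. This is a minimal illustration under assumed shapes and names (`local_cross_correlation`, `bidirectional_scan`, `radius`, `decay` are all hypothetical), not the paper's actual implementation.

```python
import numpy as np

def local_cross_correlation(feat_t, feat_prev, radius=1):
    """Correlate each location of the current frame's features (C, H, W)
    with a (2r+1)x(2r+1) neighborhood in the previous frame, yielding a
    local motion cost volume of shape ((2r+1)^2, H, W)."""
    C, H, W = feat_t.shape
    k = 2 * radius + 1
    padded = np.pad(feat_prev, ((0, 0), (radius, radius), (radius, radius)))
    cost = np.zeros((k * k, H, W))
    idx = 0
    for dy in range(k):
        for dx in range(k):
            shifted = padded[:, dy:dy + H, dx:dx + W]
            # channel-averaged dot product = correlation at this offset
            cost[idx] = (feat_t * shifted).sum(axis=0) / C
            idx += 1
    return cost

def bidirectional_scan(x, decay=0.9):
    """Toy stand-in for a bidirectional Mamba/SSM scan over a sequence
    x of shape (T, D): h_t = decay * h_{t-1} + x_t, run in both
    directions and summed, so every position sees global context."""
    fwd = np.zeros_like(x)
    bwd = np.zeros_like(x)
    h = np.zeros(x.shape[1])
    for t in range(len(x)):
        h = decay * h + x[t]
        fwd[t] = h
    h = np.zeros(x.shape[1])
    for t in reversed(range(len(x))):
        h = decay * h + x[t]
        bwd[t] = h
    return fwd + bwd
```

In a tracker, the correlation volume would feed the motion head with explicit local displacement evidence, while the bidirectional scan propagates global (camera-induced) motion along flattened spatial sequences; the real module replaces the fixed `decay` with learned, input-dependent state-space parameters.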