Obstacle detection and tracking are a critical component of autonomous robot navigation. In this paper, we propose ODTFormer, a Transformer-based model that addresses both the obstacle detection and the obstacle tracking problem. For the detection task, our approach leverages deformable attention to construct a 3D cost volume, which is decoded progressively in the form of voxel occupancy grids. We further track the obstacles by matching voxels between consecutive frames. The entire model can be optimized in an end-to-end manner. Through extensive experiments on the DrivingStereo and KITTI benchmarks, our model achieves state-of-the-art performance on the obstacle detection task. We also report accuracy comparable to state-of-the-art obstacle tracking models while requiring only a fraction of their computation cost, typically ten- to twenty-fold less. The code and model weights will be publicly released.
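The tracking step above pairs occupied voxels across consecutive frames. ODTFormer learns this matching end-to-end, so the following is only a minimal illustrative sketch of the underlying idea, not the paper's method: greedy nearest-neighbour association between two binary occupancy grids, with all function names and the `max_dist` threshold being assumptions of this sketch.

```python
# Hedged sketch: associate occupied voxels between two consecutive
# occupancy grids by greedy nearest-neighbour matching. ODTFormer's
# actual matching is a learned module; this only illustrates the idea.

def occupied(grid):
    """Return (x, y, z) indices of occupied cells in a nested-list grid."""
    return [(x, y, z)
            for x, plane in enumerate(grid)
            for y, row in enumerate(plane)
            for z, v in enumerate(row) if v]

def match_voxels(grid_t, grid_t1, max_dist=2.0):
    """Greedily pair each occupied voxel at time t with its nearest
    unclaimed occupied voxel at time t+1 within max_dist (in voxels).
    Returns a list of (source, target) index pairs."""
    src, dst = occupied(grid_t), occupied(grid_t1)
    matches, used = [], set()
    for s in src:
        best, best_d = None, max_dist
        for j, d in enumerate(dst):
            if j in used:
                continue
            dist = sum((a - b) ** 2 for a, b in zip(s, d)) ** 0.5
            if dist <= best_d:
                best, best_d = j, dist
        if best is not None:
            matches.append((s, dst[best]))
            used.add(best)
    return matches
```

Each matched pair yields a per-voxel displacement, from which an obstacle's frame-to-frame motion can be read off; a learned matcher replaces the hand-tuned distance threshold with data-driven association.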