Enlarging input images is a straightforward and effective approach to improving small object detection. However, naive image enlargement is significantly expensive in both computation and GPU memory. In fact, small objects are usually sparsely distributed and locally clustered, so massive feature-extraction computation is wasted on the non-target background regions of images. Recent works have tried to pick out target-containing regions with an extra network and then perform conventional object detection, but the newly introduced computation limits their final performance. In this paper, we propose to reuse the detector's backbone for feature-level object seeking and patch slicing, which avoids redundant feature extraction and reduces the computation cost. Together with a sparse detection head, we are able to detect small objects on high-resolution inputs (e.g., 1080P or larger) for superior performance. The resulting Efficient Small Object Detection (ESOD) approach is a generic framework that can be applied to both CNN- and ViT-based detectors to reduce computation and GPU memory costs. Extensive experiments demonstrate the efficacy and efficiency of our method. In particular, it consistently surpasses SOTA detectors by a large margin (e.g., 8% gain in AP) on the representative VisDrone, UAVDT, and TinyPerson datasets. Code is available at https://github.com/alibaba/esod.
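The core intuition (objects are sparse and locally clustered, so only a few regions of a coarse feature map need full processing) can be illustrated with a toy sketch. The grid size, threshold, and function names below are illustrative assumptions for exposition, not the paper's implementation:

```python
import numpy as np

def seek_patches(feat, thresh=0.5, patch=2):
    """Feature-level object seeking (toy version): pool a coarse
    activation map into patch cells and keep only cells whose mean
    activation exceeds a threshold, so downstream stages can skip
    the background. Illustrative sketch, not ESOD's actual code."""
    H, W = feat.shape
    mask = np.zeros((H // patch, W // patch), dtype=bool)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            if feat[i:i + patch, j:j + patch].mean() > thresh:
                mask[i // patch, j // patch] = True
    return mask

# Toy "activation map": one small bright cluster on a dark background,
# mimicking sparsely distributed, locally clustered small objects.
feat = np.zeros((8, 8))
feat[2:4, 4:6] = 1.0                    # one clustered "object"
mask = seek_patches(feat, thresh=0.5, patch=2)
kept, total = int(mask.sum()), mask.size
print(kept, total)                      # only 1 of 16 patches kept
```

In this toy case only 1 of 16 patch cells would be forwarded to the detection stage; on real high-resolution aerial imagery the background fraction, and hence the potential savings, is typically much larger.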