Bounding box regression is one of the important steps of object detection. However, rotated object detectors often involve a more complicated loss based on SkewIoU, which is unfriendly to gradient-based training. Most existing loss functions for rotated object detection measure the difference between two bounding boxes by focusing only on the deviation of areas or the distances between individual points (e.g., $\mathcal{L}_{Smooth-\ell 1}$, $\mathcal{L}_{RotatedIoU}$, and $\mathcal{L}_{PIoU}$), while the calculation of some loss functions is extremely complex (e.g., $\mathcal{L}_{KFIoU}$). To improve the efficiency and accuracy of bounding box regression for rotated object detection, we propose a novel metric for comparing arbitrary shapes based on minimum point distance, which takes into account most of the factors considered by existing loss functions for rotated object detection, i.e., the overlapping or non-overlapping area, the distance between central points, and the rotation angle. We also propose a loss function, $\mathcal{L}_{FPDIoU}$, based on four-point distance for accurate bounding box regression, focusing on faster convergence and higher-quality anchor boxes. In our experiments, the $FPDIoU$ loss is applied to training state-of-the-art rotated object detection models (e.g., RTMDet, H2RBox) on three popular rotated object detection benchmarks, DOTA, DIOR, and HRSC2016, and on two arbitrary-orientation scene text detection benchmarks, ICDAR 2017 RRC-MLT and ICDAR 2019 RRC-MLT, achieving better performance than existing loss functions.
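The core idea of comparing two rotated boxes through the distances between their four corresponding corner points can be sketched as follows. This is only an illustrative NumPy sketch of the corner-distance term, not the paper's full loss: the function names and the normalization by the squared image size are our assumptions, and the full $\mathcal{L}_{FPDIoU}$ also accounts for the overlap between the boxes, which is not reproduced here.

```python
import numpy as np

def corners(cx, cy, w, h, theta):
    """Four corner points of a rotated box given as (cx, cy, w, h, angle in radians)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])               # 2D rotation matrix
    half = np.array([[-w, -h], [w, -h], [w, h], [-w, h]]) / 2.0
    return half @ R.T + np.array([cx, cy])        # rotate, then translate to center

def four_point_distance_penalty(box_a, box_b, img_w, img_h):
    """Mean squared distance between corresponding corners of two rotated boxes,
    normalized by the squared image size so the penalty is scale-invariant
    (assumed normalization; the paper's exact weighting may differ).

    Zero when the boxes coincide in position, size, and rotation angle;
    grows with any deviation in center, extent, or orientation."""
    pa, pb = corners(*box_a), corners(*box_b)
    d2 = np.sum((pa - pb) ** 2, axis=1)           # squared corner-to-corner distances
    return float(np.sum(d2) / (4 * (img_w ** 2 + img_h ** 2)))
```

Because the four corners jointly encode center, width, height, and angle, a single corner-distance term penalizes all three factors at once, which is what makes a four-point formulation attractive compared with losses that treat area deviation and point distances separately.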