Incremental object detection aims to simultaneously maintain accuracy on old classes and detect newly emerging classes in incremental data. Most existing distillation-based methods underperform when unlabeled old-class objects are absent from the incremental dataset. While this absence can be mitigated by generating old-class samples, doing so incurs high computational costs. In this paper, we argue that the extra computational cost stems from the inconsistency between the detector and the generative model, together with redundant generation. To overcome this problem, we propose Efficient Generated Object Replay (EGOR). Specifically, we generate old-class samples by inverting the original detector, eliminating the need to train and store an additional generative model. We also propose augmented replay to reuse the objects in generated samples, thereby reducing redundant generation. In addition, we propose high-response knowledge distillation, which focuses on old-class-related knowledge and transfers the knowledge embedded in generated objects to the incremental detector. With the addition of the generated objects and their associated losses, we observe a bias towards old classes in the detector. We balance the losses for old and new classes to alleviate this bias, thereby increasing overall detection accuracy. Extensive experiments on MS COCO 2017 demonstrate that our method efficiently improves detection performance in the absence of old-class objects.