In the field of Class Incremental Object Detection (CIOD), building models that can continuously learn like humans is a major challenge. Pseudo-labeling methods, although initially strong, struggle with multi-scenario incremental learning because they tend to forget past knowledge. To overcome this, we introduce a new approach called Vision-Language Model assisted Pseudo-Labeling (VLM-PL). This technique uses a Vision-Language Model (VLM) to verify the correctness of pseudo ground-truths (GTs) without requiring additional model training. VLM-PL first derives pseudo GTs from a pre-trained detector. It then generates a custom query for each pseudo GT using carefully designed prompt templates that combine image and text features, allowing the VLM to assess correctness through its responses. Furthermore, VLM-PL integrates the refined pseudo GTs with the real GTs of upcoming training stages, effectively combining new and old knowledge. Extensive experiments on the Pascal VOC and MS COCO datasets not only highlight VLM-PL's exceptional performance in multi-scenario settings but also demonstrate its effectiveness in dual-scenario settings, achieving state-of-the-art results in both.
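The verification step described above can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the authors' implementation: `PseudoGT`, `verify_pseudo_gts`, the prompt template, and the stub VLM callable are all assumptions introduced for this example. A real VLM would also receive the cropped image region alongside the text prompt; here a text-only stub stands in for the model.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PseudoGT:
    box: tuple   # (x1, y1, x2, y2) box predicted by the pre-trained detector
    label: str   # class name predicted for that box


def verify_pseudo_gts(
    pseudo_gts: List[PseudoGT],
    vlm_answer: Callable[[str], str],
    prompt_template: str = "Does this region contain a {label}? Answer yes or no.",
) -> List[PseudoGT]:
    """Keep only pseudo GTs whose label the VLM confirms via a yes/no prompt."""
    kept = []
    for gt in pseudo_gts:
        prompt = prompt_template.format(label=gt.label)
        # The VLM's free-form answer is reduced to a binary keep/discard decision.
        if vlm_answer(prompt).strip().lower().startswith("yes"):
            kept.append(gt)
    return kept


# Stub VLM: "confirms" only labels in a fixed set, standing in for a real model.
known = {"person", "dog"}
stub_vlm = lambda p: "yes" if any(k in p for k in known) else "no"

boxes = [PseudoGT((0, 0, 10, 10), "person"), PseudoGT((5, 5, 20, 20), "unicorn")]
print([g.label for g in verify_pseudo_gts(boxes, stub_vlm)])  # the spurious box is discarded
```

The retained pseudo GTs would then be merged with the real GTs of the next training stage, as described above.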