Incremental object detection (IOD) is challenged by background shift, where background categories in sequential training data may include previously learned or future classes. Inspired by vision-language foundation models such as CLIP, which capture shared attributes from extensive image-text paired data during pre-training, we propose a novel method that exploits these attributes for incremental object detection. Our method constructs a Class-Agnostic Shared Attribute base (CASA) to capture common semantic information among incremental classes. Specifically, we use large language models to generate candidate textual attributes, select the most relevant ones based on the current training data, and record their significance in an attribute assignment matrix. For subsequent tasks, we freeze the retained attributes and continue selecting from the remaining candidates, updating the attribute assignment matrix accordingly. Furthermore, we adopt OWL-ViT as our baseline, preserving the original parameters of the pre-trained foundation model. Through parameter-efficient fine-tuning, our method adds only 0.7% to parameter storage while significantly enhancing the scalability and adaptability of IOD. Extensive two-phase and multi-phase experiments on the COCO dataset demonstrate the state-of-the-art performance of our method.
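The incremental attribute-selection procedure described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function name `select_attributes`, the use of cosine similarity as the relevance score, and greedy top-k selection are hypothetical stand-ins for the paper's actual selection criterion; only the overall flow (score candidates, freeze earlier selections, record weights in an assignment matrix) follows the description.

```python
import numpy as np

def select_attributes(class_emb, cand_emb, selected, k):
    """Illustrative sketch of one incremental task's attribute selection.

    class_emb: (C, d) text embeddings of the current task's classes
    cand_emb:  (A, d) embeddings of the LLM-generated candidate attributes
    selected:  (A,) boolean mask of attributes frozen by earlier tasks
    k:         number of new attributes to retain for this task
    Returns the updated mask and a (C, A) attribute assignment matrix.
    """
    # Cosine relevance of every candidate attribute to the current classes
    # (hypothetical relevance measure, not necessarily the paper's).
    cn = class_emb / np.linalg.norm(class_emb, axis=1, keepdims=True)
    an = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    rel = cn @ an.T                                  # (C, A)

    # Pick the k most relevant attributes among those not yet frozen;
    # attributes retained by earlier tasks stay fixed.
    avail = np.where(~selected)[0]
    top = avail[np.argsort(rel[:, avail].max(axis=0))[-k:]]

    new_mask = selected.copy()
    new_mask[top] = True

    # Assignment matrix: record relevance weights only for retained attributes.
    assign = np.zeros_like(rel)
    assign[:, new_mask] = rel[:, new_mask]
    return new_mask, assign
```

Calling this once per incremental task grows the frozen set monotonically: earlier tasks' attributes are never re-selected or modified, while the assignment matrix is re-derived for the new classes, mirroring the freeze-and-extend behavior described in the abstract.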