Incremental object detection is fundamentally challenged by catastrophic forgetting. A major factor contributing to this issue is background shift, where background categories in sequential tasks may overlap with either previously learned or future unseen classes. To address this, we propose a novel method called Class-Agnostic Shared Attribute Base (CASA) that encourages the model to learn category-agnostic attributes shared across incremental classes. Our approach leverages a large language model (LLM) to generate candidate textual attributes, selects the most relevant ones based on the current training data, and records their importance in an assignment matrix. For subsequent tasks, the retained attributes are frozen and new attributes are selected from the remaining candidates, ensuring both knowledge retention and adaptability to new classes. Extensive experiments on the COCO dataset demonstrate that our method achieves state-of-the-art performance.
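The incremental selection scheme described above can be illustrated with a minimal sketch: attributes retained from earlier tasks are frozen (kept but excluded from re-selection), and the most relevant remaining candidates are added for the new task. All names and the relevance scores here are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of CASA-style attribute selection across tasks.
# frozen_idx holds attributes retained (and frozen) from previous tasks;
# relevance is an assumed per-candidate relevance score on the current data.
import numpy as np

def select_attributes(relevance, k, frozen_idx):
    """Keep frozen attributes and add the k most relevant unfrozen candidates."""
    scores = relevance.astype(float).copy()
    scores[list(frozen_idx)] = -np.inf  # frozen attributes are retained, not re-selected
    new_idx = np.argsort(scores)[::-1][:k]  # top-k among remaining candidates
    return sorted(frozen_idx | set(int(i) for i in new_idx))

# Toy example: 6 candidate attributes, 2 already frozen from a previous task.
relevance = np.array([0.9, 0.1, 0.8, 0.4, 0.7, 0.2])
active = select_attributes(relevance, k=2, frozen_idx={0, 2})
print(active)  # frozen {0, 2} plus the two best remaining candidates
```

In a full system the returned indices would select rows of the assignment matrix, with the frozen rows' weights held fixed during training on the new task.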