This study explores the emerging area of continual panoptic segmentation, highlighting three key balances. First, we introduce past-class backtrace distillation to balance the stability of existing knowledge with adaptability to new information. This technique retraces the features associated with past classes based on the final label-assignment results and distills knowledge from the previous model only for these specific features, while allowing the remaining features to adapt flexibly to new information. Second, we introduce a class-proportional memory strategy, which aligns the class distribution in the replay sample set with that of the historical training data. This strategy maintains a balanced class representation during replay, enhancing the utility of the limited-capacity replay sample set in recalling prior classes. Third, recognizing that replay samples are annotated only for the classes of their original step, we devise balanced anti-misguidance losses, which counter the impact of incomplete annotations without introducing classification bias. Building on these innovations, we present a new method named Balanced Continual Panoptic Segmentation (BalConpas). Our evaluation on the challenging ADE20K dataset demonstrates its superior performance compared to existing state-of-the-art methods. The official code is available at https://github.com/jinpeng0528/BalConpas.
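The core idea of past-class backtrace distillation can be illustrated with a minimal sketch. This is not the paper's implementation (which operates on a query-based segmentation architecture); the function name, tensor shapes, and the plain mean-squared-error distillation term are all illustrative assumptions. The sketch only shows the selective mechanism: features whose final label assignment lands on a past class are pulled toward the previous model's features, while all other features are left free to change.

```python
import numpy as np

def backtrace_distillation_loss(feats_new, feats_old, assigned_labels, past_classes):
    """Hypothetical sketch of past-class backtrace distillation.

    feats_new, feats_old: (N, D) per-feature outputs of the current and
        previous model (assumed aligned one-to-one).
    assigned_labels: length-N class ids from the final label assignment.
    past_classes: set of class ids learned in earlier steps.
    """
    # Backtrace: keep only features whose final assignment is a past class.
    mask = np.array([c in past_classes for c in assigned_labels], dtype=bool)
    if not mask.any():
        # No past-class features in this batch: nothing to distill.
        return 0.0
    # Distill only the selected features; the rest adapt freely to new classes.
    diff = feats_new[mask] - feats_old[mask]
    return float(np.mean(diff ** 2))
```

Features assigned to new classes contribute nothing to the loss, which is how the method trades stability (on past-class features) against plasticity (everywhere else).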
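The class-proportional memory strategy can likewise be sketched as a sample-selection problem. The greedy procedure below is an illustrative assumption, not the paper's algorithm: it picks replay samples one at a time so that the accumulated class histogram of the replay set stays close (in L1 distance) to the class distribution of the historical training data.

```python
import numpy as np

def select_replay_samples(sample_class_counts, target_dist, budget):
    """Hypothetical greedy sketch of class-proportional replay selection.

    sample_class_counts: (S, C) non-negative per-sample class counts
        (e.g. pixels or instances per class), each row with a positive sum.
    target_dist: (C,) class distribution of the historical training data.
    budget: capacity of the replay sample set.
    """
    S, C = sample_class_counts.shape
    chosen, acc = [], np.zeros(C)
    remaining = list(range(S))
    for _ in range(min(budget, S)):
        best, best_err = None, np.inf
        for i in remaining:
            cand = acc + sample_class_counts[i]
            # L1 gap between the candidate replay distribution and the target.
            err = np.abs(cand / cand.sum() - target_dist).sum()
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
        acc += sample_class_counts[best]
        remaining.remove(best)
    return chosen
```

Keeping the replay set's class proportions aligned with the historical data is what lets a small, fixed-capacity memory represent all prior classes in a balanced way.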