Incremental Few-Shot (IFS) segmentation aims to learn new categories over time from only a few annotations. Although widely studied in 2D, it remains underexplored for 3D point clouds. Existing methods suffer from catastrophic forgetting or fail to learn discriminative prototypes under sparse supervision, and often overlook a key cue: novel categories frequently appear as unlabelled background in base-training scenes. We introduce SCOPE (Scene-COntextualised Prototype Enrichment), a plug-and-play background-guided prototype enrichment framework that integrates with any prototype-based 3D segmentation method. After base training, a class-agnostic segmentation model extracts high-confidence pseudo-instances from background regions to build a prototype pool. When novel classes arrive with few labelled samples, relevant background prototypes are retrieved and fused with the few-shot prototypes to form enriched representations, without retraining the backbone or adding parameters. Experiments on ScanNet and S3DIS show that SCOPE achieves state-of-the-art performance, improving novel-class IoU by up to 6.98% and 3.61%, and mean IoU by 2.25% and 1.70%, respectively, while maintaining low forgetting. Code is available at https://github.com/Surrey-UP-Lab/SCOPE.
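The retrieval-and-fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes cosine-similarity retrieval over the background prototype pool, softmax weighting of the top-k matches, and a convex combination with the few-shot prototype; the actual retrieval metric, fusion rule, and the `alpha`/`top_k` parameters are hypothetical.

```python
import numpy as np

def enrich_prototype(few_shot_proto, bg_pool, top_k=3, alpha=0.5):
    """Fuse a few-shot prototype with its top-k most similar background prototypes.

    few_shot_proto: (D,) prototype averaged from the few labelled support samples.
    bg_pool:        (N, D) pool of pseudo-instance prototypes mined from unlabelled
                    background regions during base training.
    alpha:          weight kept on the few-shot prototype (illustrative value).
    """
    # Cosine similarity between the few-shot prototype and every pool entry.
    p = few_shot_proto / np.linalg.norm(few_shot_proto)
    pool = bg_pool / np.linalg.norm(bg_pool, axis=1, keepdims=True)
    sims = pool @ p

    # Retrieve the top-k most relevant background prototypes.
    idx = np.argsort(sims)[-top_k:]

    # Softmax-weight the retrieved prototypes by similarity, then fuse.
    w = np.exp(sims[idx])
    w /= w.sum()
    retrieved = w @ bg_pool[idx]
    return alpha * few_shot_proto + (1 - alpha) * retrieved

# Toy usage: 8-D features, a pool of 5 background prototypes.
rng = np.random.default_rng(0)
enriched = enrich_prototype(rng.normal(size=8), rng.normal(size=(5, 8)))
print(enriched.shape)  # (8,)
```

Because enrichment operates purely on prototypes in feature space, it adds no parameters and requires no backbone retraining, which is what makes the framework plug-and-play.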