Few-Shot Class-Incremental Learning (FSCIL) models aim to incrementally learn new classes from scarce samples while preserving knowledge of old ones. Existing FSCIL methods usually fine-tune the entire backbone, which leads to overfitting and limits their ability to learn new classes. Recent prompt-based Class-Incremental Learning (CIL) approaches, on the other hand, alleviate forgetting by training prompts with sufficient data in each task. In this work, we propose a novel framework named Attention-aware Self-adaptive Prompt (ASP). ASP encourages task-invariant prompts to capture shared knowledge by reducing task-specific information at the attention level. In addition, the self-adaptive task-specific prompts in ASP supply specific information and transfer knowledge from old classes to new ones under an Information Bottleneck learning objective. In summary, ASP prevents overfitting on the base task and does not require abundant data in the few-shot incremental tasks. Extensive experiments on three benchmark datasets validate that ASP consistently outperforms state-of-the-art FSCIL and prompt-based CIL methods in terms of both learning new classes and mitigating forgetting.