Continual learning aims to enable models to acquire new knowledge while retaining previously learned information. Prompt-based methods have shown remarkable performance in this domain; however, they typically rely on key-value pairing, which can introduce inter-task interference and hinder scalability. To overcome these limitations, we propose a novel approach built on task-specific Prompt-Prototype (ProP) pairs, eliminating the need for key-value matching. In our method, a task-specific prompt facilitates more effective feature learning for the current task, while the corresponding prototype captures representative features of that task's inputs. During inference, predictions are generated by binding each task-specific prompt to its associated prototype. Additionally, we introduce a regularization constraint during prompt initialization that penalizes excessively large values, enhancing stability. Experiments on several widely used datasets demonstrate the effectiveness of the proposed method. In contrast to mainstream prompt-based approaches, our framework removes the dependency on key-value pairs, offering a fresh perspective for future continual learning research.
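To make the prompt-prototype binding concrete, the following is a minimal sketch of how ProP-style inference could look, not the paper's actual implementation. It assumes a frozen backbone exposed as a callable `encode(x, prompt)` that returns a feature vector conditioned on a task-specific prompt; all names here (`encode`, `build_prototypes`, `predict`, the regularization weight) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of Prompt-Prototype (ProP) inference, assuming a frozen
# backbone `encode(x, prompt)` that returns a feature vector conditioned on a
# task-specific prompt. All names are illustrative, not the paper's API.

def build_prototypes(features, labels, classes):
    """Prototype per class = mean feature over that class's training samples."""
    protos = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    return F.normalize(protos, dim=-1)

@torch.no_grad()
def predict(x, encode, prompts, prototypes, class_sets):
    """Bind each task-specific prompt to its prototypes; keep the best match."""
    best_score, best_label = -float("inf"), None
    for prompt, protos, classes in zip(prompts, prototypes, class_sets):
        feat = F.normalize(encode(x, prompt), dim=-1)  # task-conditioned feature
        sims = feat @ protos.T                         # cosine similarity per class
        score, idx = sims.max(dim=-1)
        if score.item() > best_score:
            best_score, best_label = score.item(), classes[idx.item()]
    return best_label

# One plausible reading of the initialization constraint (an assumption, not
# the paper's stated formulation): start prompts with small variance and add
# an L2 penalty so prompt values stay bounded early in training.
prompt = torch.randn(8, 768) * 0.02            # small-variance initialization
reg_loss = 1e-3 * prompt.pow(2).mean()         # penalize excessively large values
```

Under this reading, learning a new task only appends a new (prompt, prototypes) pair, which is what removes the separate key-value matching step used by prior prompt-based methods.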