In computer graphics, the ability to learn continuously from non-stationary data streams, adapting to new visual patterns while mitigating catastrophic forgetting, is of paramount importance. Existing approaches often struggle to capture and represent the essential characteristics of evolving visual concepts, hindering their applicability to dynamic graphics tasks. In this paper, we propose Ancestral Mamba, a novel approach that integrates online prototype learning into a selective discriminant space model for efficient and robust online continual learning. Our approach comprises two key components: Ancestral Prototype Adaptation (APA), which continuously refines learned visual prototypes and builds upon this ancestral knowledge to tackle new challenges, and Mamba Feedback (MF), a targeted feedback mechanism that focuses on challenging classes and refines their representations. Extensive experiments on graphics-oriented datasets such as CIFAR-10 and CIFAR-100 demonstrate that Ancestral Mamba outperforms state-of-the-art baselines, achieving significant improvements in accuracy and forgetting mitigation.
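To make the abstract's two components concrete, the following is a minimal, purely illustrative sketch of online class-prototype adaptation with a difficulty-driven feedback weight. All names and update rules here (the EMA prototype update, the multiplicative feedback weight) are our own assumptions for illustration, not the actual APA/MF formulation from the paper:

```python
class AncestralPrototypes:
    """Illustrative sketch: each class keeps a running prototype (feature
    centroid) updated online; a per-class feedback weight grows for classes
    that are misclassified, so harder classes adapt faster. This mimics the
    abstract's ideas only loosely and is not the authors' method."""

    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.prototypes = {}  # class id -> prototype vector (list of floats)
        self.feedback = {}    # class id -> difficulty weight

    @staticmethod
    def _dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def classify(self, feature):
        # Nearest-prototype prediction; None before any class is seen.
        if not self.prototypes:
            return None
        return min(self.prototypes,
                   key=lambda c: self._dist(feature, self.prototypes[c]))

    def update(self, feature, label):
        # Feedback step (hypothetical): raise the adaptation rate for a
        # class whenever the current prototypes misclassify its sample.
        pred = self.classify(feature)
        w = self.feedback.get(label, 1.0)
        if pred is not None and pred != label:
            w = min(w * 1.1, 4.0)  # cap the weight to keep updates stable
        self.feedback[label] = w

        if label not in self.prototypes:
            self.prototypes[label] = list(feature)
        else:
            # EMA prototype update, step size scaled by the feedback weight.
            lr = min((1.0 - self.momentum) * w, 1.0)
            proto = self.prototypes[label]
            self.prototypes[label] = [(1 - lr) * p + lr * f
                                      for p, f in zip(proto, feature)]
```

In this sketch, prototypes are refined incrementally as samples stream in (never rebuilt from scratch), and the feedback weight concentrates adaptation on classes the model currently confuses, which is the intuition behind the two components described above.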