Graph incremental learning (GIL), which continuously updates graph models through sequential knowledge acquisition, has garnered significant interest recently. However, existing GIL approaches focus on task-incremental and class-incremental scenarios within a single domain. Graph domain-incremental learning (Domain-IL), which aims to update models across multiple graph domains, has become critical with the development of graph foundation models (GFMs), but remains unexplored in the literature. In this paper, we propose Graph Domain-Incremental Learning via Knowledge Disentanglement and Preservation (GraphKeeper) to address catastrophic forgetting in the Domain-IL scenario from the perspectives of embedding shifts and decision boundary deviations. Specifically, to prevent embedding shifts and confusion across incremental graph domains, we first propose domain-specific parameter-efficient fine-tuning together with intra- and inter-domain disentanglement objectives. Subsequently, to maintain a stable decision boundary, we introduce deviation-free knowledge preservation to continuously fit incremental domains. Additionally, for graphs with unobservable domains, we perform domain-aware distribution discrimination to obtain precise embeddings. Extensive experiments demonstrate that the proposed GraphKeeper achieves state-of-the-art results, with a 6.5%~16.6% improvement over the runner-up and negligible forgetting. Moreover, we show that GraphKeeper can be seamlessly integrated with various representative GFMs, highlighting its broad applicability.