Incremental unlearning (IU) is critical for pre-trained models that must comply with sequential data-deletion requests, yet existing methods primarily suppress parameters or confuse knowledge without imposing explicit constraints at both the feature and gradient levels, resulting in \textit{superficial forgetting}, where residual information remains recoverable. Such incomplete forgetting risks security breaches and disrupts the balance between removal and retention, especially in IU scenarios. We propose FG-OrIU (\textbf{F}eature-\textbf{G}radient \textbf{Or}thogonality for \textbf{I}ncremental \textbf{U}nlearning), the first framework to unify orthogonal constraints at both the feature and gradient levels to achieve deep forgetting, in which the forgetting effect is irreversible. FG-OrIU decomposes the feature space via Singular Value Decomposition (SVD), separating the features of forgetting and remaining classes into distinct subspaces. It then enforces dual constraints: feature orthogonal projection acts on both the forgetting and remaining classes, while gradient orthogonal projection prevents both the reintroduction of forgotten knowledge and disruption of the remaining classes during updates. Additionally, dynamic subspace adaptation merges newly forgetting subspaces and contracts remaining subspaces, maintaining a stable balance between removal and retention across sequential unlearning tasks. Extensive experiments demonstrate the effectiveness of our method.
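The core mechanism can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy example, not the paper's implementation: the subspace rank `k`, the random features, and all function names are illustrative. It shows how an SVD-derived forgetting subspace is used to project both features and gradient updates onto its orthogonal complement.

```python
import numpy as np

def top_k_subspace(features, k):
    """Orthonormal basis (d, k) of the top-k left singular directions
    of a (n_samples, d) feature matrix; spans the class's feature subspace."""
    U, _, _ = np.linalg.svd(features.T, full_matrices=False)
    return U[:, :k]

def project_out(x, basis):
    """Remove the component of x lying in span(basis): x - B (B^T x)."""
    return x - basis @ (basis.T @ x)

rng = np.random.default_rng(0)
d, k = 16, 3
forget_feats = rng.normal(size=(50, d))   # stand-in for forgetting-class features
U_f = top_k_subspace(forget_feats, k)     # forgetting subspace

# Feature orthogonal projection: strip the forgetting-subspace component.
x_proj = project_out(forget_feats[0], U_f)

# Gradient orthogonal projection: an update direction orthogonal to the
# forgetting subspace cannot reintroduce the forgotten knowledge.
g = rng.normal(size=d)                    # stand-in for a parameter gradient
g_proj = project_out(g, U_f)

# Both projected vectors are (numerically) orthogonal to the subspace.
assert np.allclose(U_f.T @ x_proj, 0.0, atol=1e-10)
assert np.allclose(U_f.T @ g_proj, 0.0, atol=1e-10)
```

The same projection, built instead from the remaining classes' subspace, would realize the complementary constraint of not disrupting retained knowledge.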