Most LLM unlearning methods aim to approximate retrain-from-scratch behavior with minimal distribution shift, often via alignment-style objectives defined in the prediction space. While effective at reducing the generation of forgotten content, such approaches may act as mere suppression: forgotten concepts can persist in representations and remain entangled with retained knowledge. We introduce CLReg, a contrastive representation regularizer that identifies forget features and pushes them away from retain features, explicitly reducing forget-retain interference while minimally shifting retain features. We provide the first theoretical insights relating representation shaping to entanglement reduction. Across unlearning benchmarks and LLMs of different sizes, CLReg decreases forget-retain representation entanglement, which strengthens mainstream unlearning methods without posing extra privacy risks and points toward future work that reshapes the representation space to remove forgotten concepts.
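To make the idea concrete, the sketch below shows one plausible way such a contrastive representation regularizer could be instantiated. This is an illustrative assumption, not the paper's actual CLReg objective: the function name `clreg_loss`, the hinge form, and the hyperparameters `margin` and `lam` are all hypothetical. It penalizes cosine similarity between forget-set and retain-set hidden states while anchoring retain states to a frozen reference model, matching the abstract's goal of pushing forget features away from retain features with minimal shift to retain features.

```python
import torch
import torch.nn.functional as F

def clreg_loss(forget_feats, retain_feats, retain_feats_ref,
               margin=1.0, lam=1.0):
    """Hypothetical sketch of a contrastive representation regularizer.

    forget_feats:     (B_f, d) hidden states on forget-set inputs
    retain_feats:     (B_r, d) hidden states on retain-set inputs
    retain_feats_ref: (B_r, d) retain hidden states from a frozen reference model
    """
    f = F.normalize(forget_feats, dim=-1)
    r = F.normalize(retain_feats, dim=-1)
    # Push forget features away from retain features: hinge penalty on
    # pairwise cosine similarity above the threshold (1 - margin).
    sim = f @ r.T                                  # (B_f, B_r) cosine similarities
    push = F.relu(sim - (1.0 - margin)).mean()
    # Anchor retain features to the reference model to limit retain-side drift.
    anchor = F.mse_loss(retain_feats, retain_feats_ref)
    return push + lam * anchor
```

In this reading, the regularizer would be added to a mainstream unlearning loss (e.g., a gradient-ascent or preference-style forget objective), with `lam` trading off forget-retain separation against retain-feature stability.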