Machine unlearning methods have become increasingly important for selective concept removal in large pre-trained models. While recent work has explored unlearning in Euclidean contrastive vision-language models, the effectiveness of concept removal in hyperbolic spaces remains unexplored. This paper investigates machine unlearning in hyperbolic contrastive learning by adapting Alignment Calibration to MERU, a model that embeds images and text in hyperbolic space to better capture semantic hierarchies. Through systematic experiments and ablation studies, we demonstrate that hyperbolic geometry offers distinct advantages for concept removal: it achieves near-perfect forgetting while maintaining reasonable performance on retained concepts, particularly when scaling to multiple-concept removal. Our approach introduces hyperbolic-specific components, including entailment calibration and norm regularization, that leverage the unique properties of hyperbolic space. Comparative analysis with Euclidean models reveals fundamental differences in unlearning dynamics: hyperbolic unlearning reorganizes the semantic hierarchy, whereas Euclidean approaches merely disconnect cross-modal associations. These findings not only advance machine unlearning techniques but also provide insight into the geometric properties that influence concept representation and removal in multimodal models. Source code is available at https://github.com/alex-pv01/HAC.