We present a submission to the SemEval 2025 shared task on unlearning sensitive content from LLMs. Our approach employs negative preference optimization (NPO) with low-rank adaptation (LoRA). We show that this combination makes it cheap to compute additional regularization terms, which stabilize the unlearning process. Our approach significantly outperforms the shared-task baselines.
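For background (the abstract does not spell this out), the standard NPO objective penalizes likelihood on the forget set $\mathcal{D}_f$ relative to a frozen reference model $\pi_{\mathrm{ref}}$, with inverse-temperature hyperparameter $\beta$; a common formulation is
\[
\mathcal{L}_{\mathrm{NPO}}(\theta) \;=\; \frac{2}{\beta}\,\mathbb{E}_{(x,y)\sim \mathcal{D}_f}\!\left[\log\!\left(1 + \left(\frac{\pi_\theta(y\mid x)}{\pi_{\mathrm{ref}}(y\mid x)}\right)^{\!\beta}\right)\right].
\]
One plausible reading of the cost claim above is that under LoRA the frozen base weights can double as $\pi_{\mathrm{ref}}$ (obtained by simply disabling the adapters), so reference-model terms require no second copy of the model.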