Large Language Models (LLMs) have proven highly successful across a wide range of applications, from standard NLP use cases to AI agents. LLMs are trained on vast text corpora drawn from diverse sources; despite best efforts during the data pre-processing stage of training, they may pick up undesirable information such as personally identifiable information (PII). Consequently, research in Machine Unlearning (MUL) has recently become active; its main idea is to force LLMs to forget (unlearn) certain information (e.g., PII) without suffering performance loss on regular tasks. In this work, we examine the robustness of existing MUL techniques in their ability to enable leakage-proof forgetting in LLMs. In particular, we examine the effect of data transformation on forgetting: can an unlearned LLM recall forgotten information when the format of the input changes? Our findings on the TOFU dataset highlight the necessity of using diverse data formats to quantify unlearning in LLMs more reliably.