Language models (LMs) have greatly propelled research in natural language processing. However, LMs also raise concerns about generating biased or toxic content and potentially disclosing private information from the training dataset. In this work, we present a new, efficient approach, Ethos, that rectifies LMs to mitigate toxicity and bias in their outputs and to avoid privacy leakage. Ethos is built on task arithmetic. However, unlike current task arithmetic algorithms, Ethos distinguishes general, beneficial knowledge from undesired knowledge when reconstructing task vectors. Specifically, Ethos first obtains a set of principal components from the pre-trained model using singular value decomposition. Then, by projecting the task vector onto these principal components, Ethos identifies the components that encode general knowledge and those that encode undesired knowledge. Ethos performs negation using only the task vector containing undesired knowledge, thereby minimizing collateral damage to general model utility. We demonstrate the efficacy of our approach on three different tasks: debiasing, detoxification, and memorization unlearning. Evaluations show that Ethos is more effective at removing undesired knowledge while maintaining overall model performance than current task arithmetic methods.
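The pipeline the abstract describes — form a task vector, project it onto the principal components of the pre-trained weights, and negate only the undesired part — can be sketched for a single weight matrix as below. This is a minimal illustrative sketch, not the paper's implementation: the matrices are random stand-ins, and the thresholding rule used to split components into "general" versus "undesired" is a hypothetical placeholder for however Ethos actually makes that distinction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained and fine-tuned weight matrices for one layer.
theta_pre = rng.standard_normal((8, 8))
theta_ft = theta_pre + 0.1 * rng.standard_normal((8, 8))

# Task vector: the weight delta induced by fine-tuning (standard task arithmetic).
tau = theta_ft - theta_pre

# Principal components of the pre-trained weights via SVD.
U, S, Vt = np.linalg.svd(theta_pre)

# Project the task vector into the SVD basis of the pre-trained model;
# entry (i, j) measures tau's alignment with the (i, j) pair of
# pre-trained principal directions. Since U and Vt are orthogonal,
# U @ coeffs @ Vt reconstructs tau exactly.
coeffs = U.T @ tau @ Vt.T

# Hypothetical split criterion: keep only the components flagged as
# encoding undesired knowledge (here, crudely, the half with the
# smallest projection magnitudes) when building the negation vector.
threshold = np.quantile(np.abs(coeffs), 0.5)
undesired_mask = np.abs(coeffs) < threshold
tau_undesired = U @ (coeffs * undesired_mask) @ Vt

# Negation using only the undesired-knowledge task vector, leaving the
# components associated with general knowledge untouched.
theta_edited = theta_ft - tau_undesired
```

The key property the sketch preserves is that negating `tau_undesired` rather than the full `tau` leaves the general-knowledge components of the fine-tuned weights intact, which is the abstract's stated mechanism for limiting damage to model utility.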