Concept erasure aims to remove specified features from an embedding. It can improve fairness (e.g. preventing a classifier from using gender or race) and interpretability (e.g. removing a concept to observe changes in model behavior). We introduce LEAst-squares Concept Erasure (LEACE), a closed-form method which provably prevents all linear classifiers from detecting a concept while changing the embedding as little as possible, as measured by a broad class of norms. We apply LEACE to large language models with a novel procedure called "concept scrubbing," which erases target concept information from every layer in the network. We demonstrate our method on two tasks: measuring the reliance of language models on part-of-speech information, and reducing gender bias in BERT embeddings. Code is available at https://github.com/EleutherAI/concept-erasure.
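The closed-form eraser described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's reference implementation (see the linked repository for that): it assumes centered least-squares statistics, builds the whitening transform Σ_xx^(-1/2) by eigendecomposition, projects onto the column space of the whitened cross-covariance with the concept labels, and subtracts that component from each embedding. The function name `fit_leace` and all tolerances are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_leace(X, Z):
    """Sketch of a LEACE-style linear concept eraser.

    X: (n, d) embeddings; Z: (n, k) concept labels (e.g. one-hot).
    Returns a function mapping embeddings to concept-erased embeddings.
    """
    mu_x = X.mean(0)
    Xc, Zc = X - mu_x, Z - Z.mean(0)
    n = len(X)
    sigma_xx = Xc.T @ Xc / n          # covariance of the embeddings
    sigma_xz = Xc.T @ Zc / n          # cross-covariance with the concept

    # Whitening W = Σ_xx^(-1/2) and its pseudo-inverse, via eigendecomposition
    vals, vecs = np.linalg.eigh(sigma_xx)
    keep = vals > 1e-9
    W = (vecs[:, keep] * vals[keep] ** -0.5) @ vecs[:, keep].T
    W_pinv = (vecs[:, keep] * vals[keep] ** 0.5) @ vecs[:, keep].T

    # Orthogonal projection P onto the column space of W Σ_xz
    U, s, _ = np.linalg.svd(W @ sigma_xz, full_matrices=False)
    U = U[:, s > 1e-9]
    P = U @ U.T

    # Erase: x ↦ x − W⁺ P W (x − μ), removing linearly available concept info
    M = W_pinv @ P @ W
    return lambda x: x - (x - mu_x) @ M.T

# Toy check: a binary concept linearly encoded in 8-d embeddings
Z = rng.integers(0, 2, size=(500, 1)).astype(float)
X = rng.normal(size=(500, 8)) + Z @ rng.normal(size=(1, 8)) * 3.0
erase = fit_leace(X, Z)
Xe = erase(X)

# After erasure the cross-covariance with Z vanishes, so no linear
# classifier can do better than chance at recovering the concept.
cov = (Xe - Xe.mean(0)).T @ (Z - Z.mean(0)) / len(X)
assert np.abs(cov).max() < 1e-6
```

Zeroing the cross-covariance is exactly the "linear guardedness" condition: any linear classifier's decision statistic is an affine function of the embedding, so its covariance with the concept labels is zero after erasure.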