Large pre-trained language models have become popular for many applications and form an important backbone of many downstream tasks in natural language processing (NLP). Applying 'explainable artificial intelligence' (XAI) techniques to enrich such models' outputs is considered crucial for assuring their quality and shedding light on their inner workings. However, large language models are trained on a plethora of data containing a variety of biases, such as gender biases, affecting model weights and, potentially, behavior. Currently, it is unclear to what extent such biases also impact model explanations in possibly unfavorable ways. We create a gender-controlled text dataset, GECO, in which otherwise identical sentences appear in male and female forms. This gives rise to ground-truth 'world explanations' for gender classification tasks, enabling the objective evaluation of the correctness of XAI methods. We also provide GECOBench, a rigorous quantitative evaluation framework that benchmarks popular XAI methods by applying them to pre-trained language models fine-tuned to different degrees. This allows us to investigate how pre-training induces undesirable bias in model explanations and to what extent fine-tuning can mitigate such explanation bias. We show a clear dependency between explanation performance and the number of fine-tuned layers, where XAI methods particularly benefit from fine-tuning or complete retraining of embedding layers. Remarkably, this relationship holds for models achieving similar classification performance on the same task. These results highlight the utility of the proposed gender-controlled dataset and novel benchmarking approach for the research and development of novel XAI methods. All code, including dataset generation, model training, evaluation, and visualization, is available at: https://github.com/braindatalab/gecobench
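The core idea of a gender-controlled dataset with ground-truth 'world explanations' can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: all names (`GENDERED`, `make_pair`, `mass_accuracy`) are hypothetical, and the mass-accuracy score is just one example of how an attribution map could be compared against the ground-truth mask.

```python
# Hypothetical sketch: each sentence exists in a male and a female form that
# differ only in gender-defining words; those words are the only tokens that
# causally determine the gender label, so a binary mask over them serves as
# the ground-truth "world explanation".

GENDERED = {"he": "she", "his": "her", "him": "her", "man": "woman"}

def make_pair(template):
    """Instantiate a token list once in male and once in female form.

    Tokens found in GENDERED are the gender-defining words. Returns
    (male_tokens, female_tokens, mask), where mask[i] == 1 marks the
    tokens a correct explanation should attribute the prediction to.
    """
    male = list(template)
    female = [GENDERED.get(tok, tok) for tok in template]
    mask = [1 if tok in GENDERED else 0 for tok in template]
    return male, female, mask

def mass_accuracy(attribution, mask):
    """Fraction of absolute attribution mass placed on ground-truth tokens.

    One possible correctness score for an XAI method: 1.0 means all
    importance lands on the gender-defining words, 0.0 means none does.
    """
    total = sum(abs(a) for a in attribution)
    if total == 0:
        return 0.0
    hit = sum(abs(a) for a, m in zip(attribution, mask) if m)
    return hit / total

male, female, mask = make_pair(["he", "plays", "the", "piano"])
# female == ["she", "plays", "the", "piano"], mask == [1, 0, 0, 0]
```

Because both sentence variants are otherwise identical, any systematic difference in a model's explanations between them can be attributed to gender bias rather than to content differences.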