Counterfactual explanations can be used to interpret and debug text classifiers by producing minimally altered text inputs that change a classifier's output. In this work, we evaluate five methods for generating counterfactual explanations for a BERT text classifier on two datasets using three evaluation metrics. Our results suggest that established white-box substitution-based methods are effective at generating valid counterfactuals, i.e., counterfactuals that change the classifier's output. In contrast, newer methods based on large language models (LLMs) excel at producing natural and linguistically plausible text counterfactuals but often fail to generate valid ones. Based on these results, we recommend developing new counterfactual explanation methods that combine the strengths of established gradient-based approaches and newer LLM-based techniques to generate high-quality, valid, and plausible text counterfactual explanations.
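The validity criterion mentioned above can be sketched concretely: a counterfactual is valid if and only if the classifier's predicted label on it differs from the prediction on the original input. The following is a minimal illustrative sketch, not the paper's evaluation code; the keyword-based `toy_classifier` is a hypothetical stand-in for the BERT model used in the experiments.

```python
def toy_classifier(text: str) -> str:
    """Hypothetical stand-in classifier (NOT the paper's BERT model)."""
    return "positive" if "great" in text.lower() else "negative"


def validity(originals, counterfactuals, classify) -> float:
    """Fraction of counterfactuals whose predicted label differs
    from the prediction on the corresponding original input."""
    flips = sum(
        classify(orig) != classify(cf)
        for orig, cf in zip(originals, counterfactuals)
    )
    return flips / len(originals)


originals = ["The movie was great.", "The plot was great fun."]
counterfactuals = ["The movie was terrible.", "The plot was great fun!"]
# Only the first edit flips the label, so validity is 0.5.
print(validity(originals, counterfactuals, toy_classifier))  # 0.5
```

Under this definition, an LLM-generated rewrite that reads naturally but leaves the classifier's prediction unchanged scores zero on validity, which is exactly the failure mode the abstract attributes to LLM-based methods.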