The need for interpretability in deep learning has driven interest in counterfactual explanations, which identify minimal changes to an instance that change a model's prediction. Current counterfactual (CF) generation methods require task-specific fine-tuning and produce low-quality text. Large Language Models (LLMs), though effective for high-quality text generation, struggle with label-flipping counterfactuals (i.e., counterfactuals that change the prediction) without fine-tuning. We introduce two simple classifier-guided approaches to support counterfactual generation by LLMs, eliminating the need for fine-tuning while preserving the strengths of LLMs. Despite their simplicity, our approaches outperform state-of-the-art counterfactual generation methods and are effective across different LLMs, highlighting the benefits of guiding counterfactual generation by LLMs with classifier information. We further show that data augmentation with our generated CFs can improve a classifier's robustness. Our analysis reveals a critical issue in counterfactual generation by LLMs: LLMs rely on their parametric knowledge rather than faithfully following the classifier.