Explanations are an important tool for gaining insight into the behavior of ML models, calibrating user trust, and ensuring regulatory compliance. The past few years have seen a flurry of post-hoc methods for generating model explanations, many of which involve computing model gradients or solving specially designed optimization problems. However, owing to the remarkable reasoning abilities of Large Language Models (LLMs), self-explanation, i.e., prompting the model to explain its own outputs, has recently emerged as a new paradigm. In this work, we study a specific type of self-explanation: self-generated counterfactual explanations (SCEs). We design tests for measuring the efficacy of LLMs in generating SCEs. Analysis over various LLM families, model sizes, temperature settings, and datasets reveals that LLMs sometimes struggle to generate SCEs. Even when they do, their predictions often do not agree with their own counterfactual reasoning.
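To make the consistency test described above concrete, the sketch below illustrates one plausible way to check whether an LLM's prediction agrees with its own counterfactual reasoning. This is not the paper's implementation: the `query_llm` wrapper and the prompt wording are assumptions for illustration only.

```python
# Minimal sketch (assumptions, not the paper's code) of a consistency test for
# self-generated counterfactual explanations (SCEs).
# `query_llm` is a hypothetical wrapper around any chat-completion API.

def query_llm(prompt: str, temperature: float = 0.0) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError

def sce_consistency_test(text: str, original_label: str, target_label: str) -> bool:
    # Step 1: ask the model to rewrite the input so that, by its own reasoning,
    # the label would change from `original_label` to `target_label`.
    counterfactual = query_llm(
        f"Text: {text}\n"
        f"You predicted the label '{original_label}'. "
        f"Minimally edit the text so that you would instead predict '{target_label}'. "
        f"Return only the edited text."
    )

    # Step 2: in a fresh query, ask the same model to classify its counterfactual.
    new_label = query_llm(
        f"Text: {counterfactual}\nReturn only the predicted label."
    ).strip()

    # The SCE counts as consistent only if the model's own prediction on the
    # counterfactual matches the label the edit was supposed to produce.
    return new_label == target_label
```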