Large Language Models (LLMs) are capable of generating persuasive Natural Language Explanations (NLEs) to justify their answers. However, the faithfulness of these explanations should not be taken at face value. Recent studies have proposed various methods to measure the faithfulness of NLEs, typically by inserting perturbations at the explanation or feature level. We argue that these approaches are neither comprehensive nor correctly designed according to the established definition of faithfulness. Moreover, we highlight the risks of grounding faithfulness findings in out-of-distribution samples. In this work, we leverage a causal mediation technique called activation patching to measure the faithfulness of an explanation in supporting the explained answer. Our proposed metric, Causal Faithfulness, quantifies the consistency of causal attributions between explanations and the corresponding model outputs as an indicator of faithfulness. We experimented with models ranging from 2B to 27B parameters and found that models that underwent alignment tuning tend to produce more faithful and plausible explanations. We find that Causal Faithfulness is a promising improvement over existing faithfulness tests: it takes the model's internal computations into account and avoids the out-of-distribution concerns that could otherwise undermine the validity of faithfulness assessments. We release our code at \url{https://github.com/wj210/Causal-Faithfulness}
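To make the core intervention concrete: activation patching caches an internal activation from one forward pass and swaps it into another, then measures how much the output shifts. The following is a minimal sketch of that idea on a toy two-layer numpy network, not the paper's actual setup or models; all names (`W1`, `W2`, `forward`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network standing in for a transformer's layer stack.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x, patch_hidden=None):
    """Run the toy model; optionally overwrite ("patch") the hidden
    activation with one cached from a different run."""
    h = np.tanh(x @ W1)
    if patch_hidden is not None:
        h = patch_hidden  # causal intervention: swap in the cached activation
    return h @ W2

x_clean = rng.normal(size=4)
x_corrupt = rng.normal(size=4)

# Cache the hidden activation from the clean run.
h_clean = np.tanh(x_clean @ W1)

out_corrupt = forward(x_corrupt)                         # corrupted baseline
out_patched = forward(x_corrupt, patch_hidden=h_clean)   # patched run
out_clean = forward(x_clean)

# The causal effect of this activation: how far patching it moves the
# output away from the corrupted baseline.
effect = np.linalg.norm(out_patched - out_corrupt)
print(f"causal effect of patching the hidden layer: {effect:.3f}")
```

In a real LLM the patch targets one layer and token position at a time (via forward hooks), and the per-component effects form the causal attributions whose consistency between answer and explanation the metric compares.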